Mirror of https://github.com/ollama/ollama-python.git (synced 2026-01-13)
## Running Examples

Run the examples in this directory with:

```sh
# Run example
python3 examples/<example>.py
```

See ollama/docs/api.md for full API documentation.
### Chat - Chat with a model
- chat.py
- async-chat.py
- chat-stream.py - Streamed outputs
- chat-with-history.py - Chat with a model while maintaining a history of the conversation
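The chat examples above all share one shape: a list of role/content messages passed to `ollama.chat`. A minimal sketch of the chat-with-history pattern follows; the model name `llama3.2` is an assumption, and the network call is guarded so the snippet degrades gracefully when no Ollama server is running:

```python
def add_turn(history, role, content):
    """Maintain conversation history by appending each turn as a role/content dict."""
    return history + [{"role": role, "content": content}]

# Start a conversation with a single user turn.
messages = add_turn([], "user", "Why is the sky blue?")

try:
    import ollama  # requires `pip install ollama`
    response = ollama.chat(model="llama3.2", messages=messages)
    # Keep the assistant's reply in the history so the next turn has context.
    messages = add_turn(messages, "assistant", response["message"]["content"])
except Exception:
    pass  # no Ollama server reachable; skip the network call
```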
### Generate - Generate text with a model
- generate.py
- async-generate.py
- generate-stream.py - Streamed outputs
- fill-in-middle.py - Given a prefix and suffix, fill in the middle
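For generation, the same client exposes `ollama.generate`, which takes a prompt (and, for fill-in-the-middle, a suffix) rather than a message list. A minimal streamed sketch, with an illustrative code-completion prefix/suffix and an assumed code model; the call is skipped when no server is available:

```python
# Prefix and suffix for fill-in-the-middle completion (values are illustrative).
prefix = "def add(a, b):\n    return "
suffix = "\n\nprint(add(1, 2))\n"

try:
    import ollama  # requires `pip install ollama`
    # Streamed generation yields chunks; print tokens as they arrive.
    for chunk in ollama.generate(
        model="codellama:7b-code", prompt=prefix, suffix=suffix, stream=True
    ):
        print(chunk["response"], end="", flush=True)
except Exception:
    pass  # no Ollama server reachable; skip the network call
```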
### Tools/Function Calling - Call a function with a model
- tools.py - Simple example of Tools/Function Calling
- async-tools.py
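The tool-calling flow is: describe a function to the model, let it request a call, then dispatch the call back to your own code. A minimal sketch with a hand-written JSON-schema tool definition (the function, schema, and model name are illustrative; the request is guarded for when no server is running):

```python
def add_two_numbers(a: int, b: int) -> int:
    """The local function the model may ask us to call."""
    return a + b

# JSON-schema style tool definition describing the function to the model.
tools = [{
    "type": "function",
    "function": {
        "name": "add_two_numbers",
        "description": "Add two integers",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "integer"},
                "b": {"type": "integer"},
            },
            "required": ["a", "b"],
        },
    },
}]

try:
    import ollama  # requires `pip install ollama`
    response = ollama.chat(
        model="llama3.2",
        messages=[{"role": "user", "content": "What is 2 + 3?"}],
        tools=tools,
    )
    # Dispatch any tool calls the model requested back to our function.
    for call in response.message.tool_calls or []:
        if call.function.name == "add_two_numbers":
            print(add_two_numbers(**call.function.arguments))
except Exception:
    pass  # no Ollama server reachable; skip the network call
```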
### Multimodal with Images - Chat with a multimodal (image chat) model
- multimodal-chat.py
- multimodal-generate.py
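Multimodal chat attaches images to an ordinary chat message via an `images` list. A minimal sketch; the model name and image path are illustrative, and the call is skipped when no Ollama server is reachable:

```python
# A chat message that attaches an image by file path (path is illustrative).
message = {
    "role": "user",
    "content": "What is in this picture?",
    "images": ["image.jpg"],  # file paths or raw bytes
}

try:
    import ollama  # requires `pip install ollama`
    response = ollama.chat(model="llava", messages=[message])
    print(response["message"]["content"])
except Exception:
    pass  # no Ollama server or image available; skip the network call
```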
### Structured Outputs - Generate structured outputs with a model
- structured-outputs.py
- async-structured-outputs.py
- structured-outputs-image.py
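Structured outputs constrain the model's reply to a JSON schema passed in the `format` parameter. A minimal sketch with a hand-written schema (the structured-outputs examples build the schema with Pydantic instead; the model name is an assumption and the request is guarded):

```python
# A hand-written JSON schema constraining the model's reply (illustrative).
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

try:
    import json
    import ollama  # requires `pip install ollama`
    response = ollama.chat(
        model="llama3.2",
        messages=[{"role": "user", "content": "Tell me about a fictional person."}],
        format=schema,  # the chat API accepts a JSON schema here
    )
    # The reply content is JSON conforming to the schema.
    person = json.loads(response["message"]["content"])
    print(person["name"], person["age"])
except Exception:
    pass  # no Ollama server reachable; skip the network call
```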
### Ollama List - List all downloaded models and their properties
- list.py
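Listing models is a single call. A minimal sketch that also formats the reported byte size for display (the `human_size` helper is my own addition, not part of the library; the call is skipped without a running server):

```python
def human_size(n_bytes):
    """Render a byte count in a human-readable unit (helper for display only)."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if n_bytes < 1024:
            return f"{n_bytes:.0f} {unit}"
        n_bytes /= 1024
    return f"{n_bytes:.0f} PB"

try:
    import ollama  # requires `pip install ollama`
    # Each entry carries the model name plus properties such as size and digest.
    for model in ollama.list().models:
        print(model.model, human_size(model.size))
except Exception:
    pass  # no Ollama server reachable; skip the network call
```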
### Ollama ps - Show model status with CPU/GPU usage
- ps.py
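`ollama.ps()` reports the models currently loaded in memory, including how much of each sits in GPU memory. A minimal sketch; the `gpu_fraction` helper is my own addition for computing the CPU/GPU split, and the call is guarded for when no server is running:

```python
def gpu_fraction(size, size_vram):
    """Fraction of a loaded model held in GPU memory (0.0 when size is zero)."""
    return size_vram / size if size else 0.0

try:
    import ollama  # requires `pip install ollama`
    for model in ollama.ps().models:
        print(model.model, f"{gpu_fraction(model.size, model.size_vram):.0%} GPU")
except Exception:
    pass  # no Ollama server reachable; skip the network call
```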
### Ollama Pull - Pull a model from Ollama
Requirement: `pip install tqdm`
- pull.py
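Pulling streams progress events, which is why the example depends on tqdm for progress bars. A minimal sketch of consuming that stream (the model name is an assumption, `short_digest` is my own display helper, and everything network-related is guarded so the snippet runs without a server):

```python
def short_digest(digest):
    """Trim a 'sha256:<hex>' digest to the 12-char prefix shown in progress bars."""
    return digest[7:19]

try:
    import ollama  # requires `pip install ollama`
    from tqdm import tqdm  # requirement noted above: pip install tqdm

    current_digest, bar = "", None
    for progress in ollama.pull("llama3.2", stream=True):
        digest = progress.get("digest", "")
        if not digest:
            # Status-only events (e.g. "pulling manifest") have no digest.
            print(progress.get("status"))
            continue
        if digest != current_digest and (total := progress.get("total")):
            # A new layer started downloading: open a fresh progress bar.
            if bar is not None:
                bar.close()
            bar = tqdm(total=total, desc=f"pulling {short_digest(digest)}",
                       unit="B", unit_scale=True)
            current_digest = digest
        if bar is not None and (completed := progress.get("completed")):
            bar.update(completed - bar.n)
except Exception:
    pass  # no Ollama server reachable; skip the network call
```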