# Running Examples

Run the examples in this directory with:

```shell
# Navigate to examples directory
cd examples/

# Run example
python3 <example>.py
```
### Chat
- chat.py - Basic chat with model
- chat-stream.py - Stream chat with model
- async-chat.py - Async chat with model
### Generate
- generate.py - Generate text with model
- generate-stream.py - Stream generate text with model
- async-generate.py - Async generate text with model
### List
- list.py - List all downloaded models and their properties
- async-list.py - Async list all downloaded models and their properties
### Fill in the middle
- fill-in-middle.py - Fill in the middle with model
### Multimodal
- multimodal.py - Multimodal chat with model
### Pull Progress

Requirement: `pip install tqdm`
- pull-progress.py - Pull progress with model
### Ollama create (create a model)
- create.py - Create a model
### Ollama ps (show model status - cpu/gpu usage)

- ps.py - Ollama ps
- async-ps.py - Async Ollama ps
### Tools/Function Calling
- tools.py - Simple example of Tools/Function Calling
- async-tools.py - Async example of Tools/Function Calling
### Configuring Clients
Custom parameters can be passed to the client when initializing:
```python
import ollama

client = ollama.Client(
  host='http://localhost:11434',
  timeout=10.0,           # Default: None
  follow_redirects=True,  # Default: True
  headers={'x-some-header': 'some-value'},
)
```
Similarly, the AsyncClient class can be configured with the same parameters.