Mirror of https://github.com/ollama/ollama-python.git, synced 2026-01-13 13:47:17 +08:00.
## Running Examples

Run the examples in this directory with:

```shell
# Run an example
python3 examples/<example>.py

# or with uv
uv run examples/<example>.py
```
See ollama/docs/api.md for full API documentation.
### Chat - Chat with a model

- chat.py
- async-chat.py
- chat-stream.py - Streamed outputs
- chat-with-history.py - Chat with a model while maintaining a history of the conversation
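The history example hinges on one simple pattern: every turn, user and assistant alike, is appended to a single messages list that is re-sent on each call, so the model always sees the full conversation. A minimal stdlib-only sketch of that bookkeeping (the actual `ollama.chat()` call is omitted and the assistant replies below are placeholders):

```python
# Sketch of the history bookkeeping behind chat-with-history.py: keep every
# turn in one list of {'role', 'content'} dicts and pass the whole list as
# `messages` to ollama.chat() on each call (the call itself is omitted here).

def append_turn(history: list, user_prompt: str, assistant_reply: str) -> list:
    """Record one exchange; the full list is what gets sent as `messages`."""
    history.append({'role': 'user', 'content': user_prompt})
    history.append({'role': 'assistant', 'content': assistant_reply})
    return history

history = []
append_turn(history, 'Why is the sky blue?', 'Because of Rayleigh scattering.')
append_turn(history, 'And sunsets?', 'Longer light paths scatter the blue away.')
# history now holds four messages, oldest first, ready to send as `messages`
```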
### Generate - Generate text with a model

- generate.py
- async-generate.py
- generate-stream.py - Streamed outputs
- fill-in-middle.py - Given a prefix and suffix, fill in the middle
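Fill-in-the-middle is easiest to picture as three pieces: the example sends the model a prefix and a suffix, and the model returns only the code between them. A sketch with a placeholder completion standing in for the model's output (no model call is made here; the prefix and suffix text are illustrative):

```python
# Fill-in-the-middle: the model sees a prefix and a suffix and produces only
# the middle. `middle` below is a placeholder standing in for the model's
# completion; fill-in-middle.py obtains it from a generate call.
prefix = 'def remove_non_ascii(s: str) -> str:\n'
suffix = '\n    return result'
middle = "    result = ''.join(c for c in s if ord(c) < 128)"

# The final code is simply the three pieces reassembled in order.
completed = prefix + middle + suffix
```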
### Tools/Function Calling - Call a function with a model

- tools.py - Simple example of tools/function calling
- async-tools.py
- multi-tool.py - Using multiple tools, with thinking enabled
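A tool is described to the model as a JSON schema; ollama-python can also accept a plain Python function and derive the schema from its signature and docstring. A hand-written sketch of such a schema, with the dispatch step that runs once the model returns a tool call (the `chat(..., tools=[...])` call itself is omitted, and the tool name is illustrative):

```python
# Sketch of a tool definition in the JSON-schema form the Ollama API accepts.
# The actual request would pass tools=[add_tool] to ollama.chat(); the library
# can also build an equivalent schema from the Python function directly.

def add_two_numbers(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

add_tool = {
    'type': 'function',
    'function': {
        'name': 'add_two_numbers',
        'description': 'Add two numbers',
        'parameters': {
            'type': 'object',
            'required': ['a', 'b'],
            'properties': {
                'a': {'type': 'integer', 'description': 'The first number'},
                'b': {'type': 'integer', 'description': 'The second number'},
            },
        },
    },
}

# When the model replies with a tool call, look the function up by name and
# invoke it with the arguments the model supplied.
available = {'add_two_numbers': add_two_numbers}
result = available['add_two_numbers'](a=2, b=3)
```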
#### gpt-oss

- gpt-oss-tools.py
- gpt-oss-tools-stream.py
### Web search

An API key from Ollama's cloud service is required; you can create one in your Ollama account settings.

```shell
export OLLAMA_API_KEY="your_api_key_here"
```

- web-search.py
- web-search-gpt-oss.py - Using browser research tools with gpt-oss
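The examples read the key from the `OLLAMA_API_KEY` environment variable. A small guard like the following (a sketch, not code from the examples; the function name is illustrative) fails fast with a clear message when the key has not been exported:

```python
import os

def require_api_key(env_var: str = 'OLLAMA_API_KEY') -> str:
    """Return the cloud API key, or raise a clear error if it is not exported."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f'{env_var} is not set; export it before running the web search examples.'
        )
    return key
```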
### MCP server

The MCP server can be used with an MCP client such as Cursor, Cline, Codex, Open WebUI, Goose, and others.

```shell
uv run examples/web-search-mcp.py
```

Configuration to use with an MCP client:

```json
{
  "mcpServers": {
    "web_search": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "path/to/ollama-python/examples/web-search-mcp.py"],
      "env": { "OLLAMA_API_KEY": "your_api_key_here" }
    }
  }
}
```
### Multimodal with Images - Chat with a multimodal (image chat) model

- multimodal-chat.py
- multimodal-generate.py
### Structured Outputs - Generate structured outputs with a model

- structured-outputs.py
- async-structured-outputs.py
- structured-outputs-image.py
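Structured outputs work by passing a JSON schema as the `format` argument to the chat or generate call; the examples build that schema with Pydantic's `model_json_schema()`, but at bottom it is just a dict. A hand-written sketch (no model call is made; the reply string below is an illustrative stand-in for what the model would return):

```python
import json

# Hand-written JSON schema for the desired response shape. The structured
# outputs examples derive an equivalent dict with Pydantic and pass it as
# chat(..., format=schema), which constrains the model to emit matching JSON.
schema = {
    'type': 'object',
    'required': ['name', 'capital', 'languages'],
    'properties': {
        'name': {'type': 'string'},
        'capital': {'type': 'string'},
        'languages': {'type': 'array', 'items': {'type': 'string'}},
    },
}

# The reply content is then a JSON string conforming to the schema; this one
# is a placeholder standing in for a real model response.
reply = '{"name": "Canada", "capital": "Ottawa", "languages": ["English", "French"]}'
country = json.loads(reply)
```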
### Ollama List - List all downloaded models and their properties

- list.py

### Ollama Show - Display model properties and capabilities

- show.py

### Ollama ps - Show model status with CPU/GPU usage

- ps.py

### Ollama Pull - Pull a model from Ollama

- pull.py

Requirement: `pip install tqdm`