# Running Examples

Run the examples in this directory with:

```shell
# Run an example
python3 examples/<example>.py

# or with uv
uv run examples/<example>.py
```

See [ollama/docs/api.md](https://github.com/ollama/ollama/blob/main/docs/api.md) for full API documentation.

### Chat - Chat with a model
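A minimal sketch of the chat call (the `gemma3` model and the prompt are illustrative; a running Ollama server with the model pulled is assumed):

```python
# Build a standard chat message list; each message has a role and content.
messages = [{'role': 'user', 'content': 'Why is the sky blue?'}]

try:
    from ollama import chat

    # chat() sends the conversation and returns a response with a message.
    response = chat(model='gemma3', messages=messages)
    print(response.message.content)
except Exception as err:  # e.g. no server running or model not pulled
    print(f'chat unavailable: {err}')
```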

### Generate - Generate text with a model
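A matching sketch for one-shot generation, which takes a single prompt instead of a message list (model name illustrative; assumes a running server):

```python
prompt = 'Explain recursion in one sentence.'

try:
    from ollama import generate

    # generate() completes a single prompt rather than a conversation.
    response = generate(model='gemma3', prompt=prompt)
    print(response.response)  # the generated text
except Exception as err:  # e.g. no server running
    print(f'generate unavailable: {err}')
```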

### Tools/Function Calling - Call a function with a model
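A hedged sketch of function calling: the library can build a tool schema from a plain Python function's signature and docstring, and the model may answer with `tool_calls` naming a function to run (`add_two_numbers` is a made-up example tool, not part of the library):

```python
def add_two_numbers(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

# Map tool names back to callables so returned tool calls can be executed.
available_tools = {'add_two_numbers': add_two_numbers}

try:
    from ollama import chat

    response = chat(
        model='gemma3',
        messages=[{'role': 'user', 'content': 'What is 11 plus 31?'}],
        tools=[add_two_numbers],  # schema is inferred from the function itself
    )
    for call in response.message.tool_calls or []:
        fn = available_tools.get(call.function.name)
        if fn is not None:
            print(call.function.name, '->', fn(**call.function.arguments))
except Exception as err:  # e.g. no server running
    print(f'tool call skipped: {err}')
```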

### gpt-oss

An API key from Ollama's cloud service is required. You can create one in your account settings on [ollama.com](https://ollama.com).

```shell
export OLLAMA_API_KEY="your_api_key_here"
```
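The web search examples read this key from the environment. A sketch, assuming a recent ollama-python release that exposes `web_search` (the query is illustrative):

```python
import os

# The gpt-oss/web-search examples expect the key in the environment.
api_key = os.environ.get('OLLAMA_API_KEY')

if not api_key:
    print('Set OLLAMA_API_KEY before running the gpt-oss examples.')
else:
    try:
        from ollama import web_search  # available in recent releases

        results = web_search('What is Ollama?')
        print(results)
    except Exception as err:  # e.g. older library version or network issue
        print(f'web search unavailable: {err}')
```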

### MCP server

The MCP server can be used with an MCP client like Cursor, Cline, Codex, Open WebUI, Goose, and more.

```shell
uv run examples/web-search-mcp.py
```

Configuration to use with an MCP client:

```json
{
  "mcpServers": {
    "web_search": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "path/to/ollama-python/examples/web-search-mcp.py"],
      "env": { "OLLAMA_API_KEY": "your_api_key_here" }
    }
  }
}
```

### Multimodal with Images - Chat with a multimodal (image) model
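Images are attached through the message's `images` field, which accepts file paths or raw bytes (the path below is a placeholder, and a vision-capable model is assumed):

```python
# A chat message with an attached image; 'images' accepts paths or bytes.
message = {
    'role': 'user',
    'content': 'What is in this image?',
    'images': ['path/to/image.png'],  # placeholder path
}

try:
    from ollama import chat

    response = chat(model='gemma3', messages=[message])
    print(response.message.content)
except Exception as err:  # e.g. no server, or the placeholder path does not exist
    print(f'multimodal chat unavailable: {err}')
```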

### Structured Outputs - Generate structured outputs with a model
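The `format` parameter accepts a JSON schema that constrains the model's output; the schema below is an illustrative example (assumes a running server):

```python
import json

# A JSON schema describing the desired output shape.
schema = {
    'type': 'object',
    'properties': {
        'name': {'type': 'string'},
        'age': {'type': 'integer'},
    },
    'required': ['name', 'age'],
}

try:
    from ollama import chat

    response = chat(
        model='gemma3',
        messages=[{'role': 'user', 'content': 'Describe a fictional person as JSON.'}],
        format=schema,  # constrain the output to the schema
    )
    person = json.loads(response.message.content)
    print(person)
except Exception as err:  # e.g. no server running
    print(f'structured output unavailable: {err}')
```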

### Ollama List - List all downloaded models and their properties
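A sketch that lists downloaded models, with a small size-formatting helper added for readability (the helper is not part of the library):

```python
def human_size(num_bytes: float) -> str:
    """Format a byte count like '4.3 GB' (illustrative helper)."""
    for unit in ('B', 'KB', 'MB', 'GB', 'TB'):
        if num_bytes < 1024 or unit == 'TB':
            return f'{num_bytes:.1f} {unit}'
        num_bytes /= 1024

try:
    from ollama import list as list_models

    for model in list_models().models:
        print(model.model, human_size(model.size or 0))
except Exception as err:  # e.g. no server running
    print(f'list unavailable: {err}')
```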

### Ollama Show - Display model properties and capabilities

### Ollama ps - Show model status with CPU/GPU usage

### Ollama Pull - Pull a model from Ollama

Requirement: `pip install tqdm`
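The pull example in this directory renders progress bars with tqdm; as a stripped-down sketch, `stream=True` yields incremental progress updates that can be printed as percentages (`percent` is a hypothetical helper; model name illustrative):

```python
def percent(completed: int, total: int) -> int:
    """Completion percentage for one progress update; 0 when total is unknown."""
    return 0 if not total else round(100 * completed / total)

try:
    from ollama import pull

    # stream=True yields progress updates instead of blocking until done.
    for progress in pull('gemma3', stream=True):
        if progress.total:
            print(f'{progress.status}: {percent(progress.completed or 0, progress.total)}%')
        else:
            print(progress.status)
except Exception as err:  # e.g. no server running
    print(f'pull unavailable: {err}')
```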

### Ollama Create - Create a model from a Modelfile

### Ollama Embed - Generate embeddings with a model
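Embeddings come back as vectors; a common follow-up is comparing them with cosine similarity (`all-minilm` is an illustrative embedding model, and the `cosine` helper is not part of the library):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

try:
    from ollama import embed

    response = embed(model='all-minilm', input=['hello world', 'hi there'])
    v1, v2 = response.embeddings
    print('similarity:', cosine(v1, v2))
except Exception as err:  # e.g. no server running
    print(f'embed unavailable: {err}')
```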

### Thinking - Enable thinking mode for a model

### Thinking (generate) - Enable thinking mode with the generate endpoint

### Thinking (levels) - Choose a thinking level ('low', 'medium', or 'high')
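The thinking examples can be sketched together: `think` accepts a boolean, or on models that support levels, one of the strings above (model name and prompt are illustrative; assumes a running server):

```python
# Levels supported by some models, per the thinking-levels example.
THINK_LEVELS = ('low', 'medium', 'high')

try:
    from ollama import chat

    response = chat(
        model='gpt-oss',
        messages=[{'role': 'user', 'content': 'What is 17 * 23?'}],
        think=THINK_LEVELS[0],  # or think=True on models without levels
    )
    print('thinking:', response.message.thinking)
    print('answer:', response.message.content)
except Exception as err:  # e.g. no server, or a model without thinking support
    print(f'thinking unavailable: {err}')
```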