Ollama Python Library

The Ollama Python library provides the easiest way to integrate Python 3.8+ projects with Ollama.

Install

pip install ollama

Usage

import ollama
response = ollama.chat(model='llama3.1', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])

Streaming responses

Response streaming can be enabled by setting stream=True. This modifies the function call to return a Python generator, where each part is an object in the stream.

import ollama

stream = ollama.chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
    stream=True,
)

for chunk in stream:
  print(chunk['message']['content'], end='', flush=True)

API

The Ollama Python library's API is designed around the Ollama REST API.

Chat

ollama.chat(model='llama3.1', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])

Generate

ollama.generate(model='llama3.1', prompt='Why is the sky blue?')

List

ollama.list()
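
As a quick sketch of working with the result, assuming the response exposes the locally available models under a 'models' key as in the /api/tags REST endpoint:

response = ollama.list()
for m in response['models']:
  # each entry describes one locally available model
  print(m)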

Show

ollama.show('llama3.1')

Create

modelfile='''
FROM llama3.1
SYSTEM You are mario from super mario bros.
'''

ollama.create(model='example', modelfile=modelfile)
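
Once created, the new model can be used with the same chat and generate calls shown above, for example:

ollama.chat(model='example', messages=[{'role': 'user', 'content': 'Who are you?'}])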

Copy

ollama.copy('llama3.1', 'user/llama3.1')

Delete

ollama.delete('llama3.1')

Pull

ollama.pull('llama3.1')
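
Pulling a large model can take a while. As a sketch, and assuming pull accepts the same stream=True flag as chat and yields progress objects with a 'status' field (mirroring the /api/pull REST stream):

for progress in ollama.pull('llama3.1', stream=True):
  # each streamed part reports the current download status
  print(progress['status'])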

Push

ollama.push('user/llama3.1')

Embed

ollama.embed(model='llama3.1', input='The sky is blue because of Rayleigh scattering')

Embed (batch)

ollama.embed(model='llama3.1', input=['The sky is blue because of Rayleigh scattering', 'Grass is green because of chlorophyll'])
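
A minimal sketch of reading the result, assuming the response exposes one vector per input under an 'embeddings' key as in the /api/embed REST endpoint:

response = ollama.embed(model='llama3.1', input=['The sky is blue because of Rayleigh scattering', 'Grass is green because of chlorophyll'])
for vector in response['embeddings']:
  # one embedding vector (a list of floats) per input string
  print(len(vector))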

Ps

ollama.ps()
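
ps lists the models currently loaded into memory. As a sketch, assuming the response exposes them under a 'models' key as in the /api/ps REST endpoint:

for m in ollama.ps()['models']:
  # each entry describes a model currently loaded into memory
  print(m)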

Custom client

A custom client can be created with the following fields:

  • host: The Ollama host to connect to
  • timeout: The timeout for requests (see the timeout sketch below)

from ollama import Client
client = Client(host='http://localhost:11434')
response = client.chat(model='llama3.1', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
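
The timeout listed above can be passed alongside host; a minimal sketch, with an arbitrary 30-second value chosen for illustration:

from ollama import Client
client = Client(host='http://localhost:11434', timeout=30)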

Async client

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama3.1', messages=[message])

asyncio.run(chat())

Setting stream=True modifies functions to return a Python asynchronous generator:

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  async for part in await AsyncClient().chat(model='llama3.1', messages=[message], stream=True):
    print(part['message']['content'], end='', flush=True)

asyncio.run(chat())

Errors

Errors are raised if requests return an error status or if an error is detected while streaming.

model = 'does-not-yet-exist'

try:
  ollama.chat(model)
except ollama.ResponseError as e:
  print('Error:', e.error)
  if e.status_code == 404:
    ollama.pull(model)
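
As a follow-up sketch, the failed request can simply be retried once the pull has completed:

model = 'does-not-yet-exist'

try:
  response = ollama.chat(model, messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
except ollama.ResponseError as e:
  print('Error:', e.error)
  if e.status_code == 404:
    ollama.pull(model)
    # the model is now available locally, so retry the original request
    response = ollama.chat(model, messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])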