Ollama is a lightweight, open-source tool for running and managing large language models locally. Track your Ollama model calls with AgentOps.

1. Install the AgentOps SDK

pip install agentops

2. Install the Ollama SDK

pip install ollama
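
The ollama Python package talks to a locally running Ollama server, so install Ollama itself and make sure the server is up before running any code. A minimal sketch of the CLI steps (assuming a standard Ollama install; the desktop app starts the server automatically):

# start the local Ollama server (not needed if the desktop app is running)
ollama serve

# pre-pull a model from the command line (optional; the SDK can also pull)
ollama pull mistral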

3. Add 3 lines of code

Make sure to call agentops.init before making any model calls (OpenAI, Cohere, CrewAI, Ollama, etc.).

import agentops
import ollama

agentops.init(<INSERT YOUR API KEY HERE>)
agentops.start_session()

ollama.pull('mistral')  # download the model locally before first use

response = ollama.chat(
    model='mistral',
    messages=[{
        'role': 'user',
        'content': 'What are the benefits of using AgentOps for monitoring LLMs?',
    }]
)
print(response['message']['content'])
...
# End of program (e.g. main.py)
agentops.end_session("Success")

For easy access, set your API key as an environment variable or in a .env file.

# Alternatively, you can set the API key as an environment variable
AGENTOPS_API_KEY=<YOUR API KEY>

Read more about environment variables in Advanced Configuration
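
If you keep the key in a .env file, load it before calling agentops.init. A minimal sketch using python-dotenv (the loader choice is an assumption; any method of populating the environment works):

import os

import agentops
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads AGENTOPS_API_KEY from a local .env file
agentops.init(os.getenv("AGENTOPS_API_KEY"))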

4. Run your Agent

Execute your program and visit app.agentops.ai/drilldown to observe your Agent! šŸ•µļø

After your run, AgentOps prints a clickable URL in the console that links directly to your session in the dashboard.

Full Examples

import ollama
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)

ollama.pull("<MODEL NAME>")
response = ollama.chat(
    model="<MODEL NAME>",
    # ollama.chat has no max_tokens argument; cap output length via options
    options={"num_predict": 1024},
    messages=[{
        "role": "user",
        "content": "Write a haiku about AI and humans working together"
    }]
)

print(response['message']['content'])
agentops.end_session('Success')
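
Ollama can also stream responses chunk by chunk. A minimal sketch of a streaming call (same placeholders as above; it assumes the AgentOps Ollama integration records streamed calls the same way as non-streaming ones):

import ollama
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)

ollama.pull("<MODEL NAME>")

# stream=True returns an iterator of response chunks
stream = ollama.chat(
    model="<MODEL NAME>",
    messages=[{"role": "user", "content": "Explain observability in one sentence"}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)

agentops.end_session("Success")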