Ollama is a lightweight, open-source tool for running and managing LLMs locally. Track your Ollama model calls with AgentOps.
1. Install the AgentOps SDK

```bash
pip install agentops
```
2. Install the Ollama SDK

```bash
pip install ollama
```
3. Add 3 lines of code

Make sure to call `agentops.init` before making calls to any model provider (Ollama, OpenAI, Cohere, CrewAI, etc.).

```python
import agentops
import ollama

agentops.init(<INSERT YOUR API KEY HERE>)
agentops.start_session()

ollama.pull("<MODEL NAME>")

response = ollama.chat(
    model="mistral",
    messages=[{
        "role": "user",
        "content": "What are the benefits of using AgentOps for monitoring LLMs?",
    }],
)
print(response["message"]["content"])

...
# End of program (e.g. main.py)
agentops.end_session("Success")
```
Set your API key as an environment variable (for example, in a `.env` file) for easy access.

```bash
# Alternatively, you can set the API key as an environment variable
AGENTOPS_API_KEY=<YOUR API KEY>
```
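As a minimal sketch of loading that variable, here's one way to do it with `python-dotenv` (an assumption; any mechanism that places `AGENTOPS_API_KEY` in the process environment works, and `agentops.init()` will fall back to that variable when no key is passed):

```python
# Sketch: load AGENTOPS_API_KEY from a .env file before initializing.
# Assumes python-dotenv is installed (pip install python-dotenv).
import os

from dotenv import load_dotenv
import agentops

load_dotenv()  # reads .env in the current directory into os.environ

# Passing the key explicitly; calling agentops.init() with no arguments
# also picks up AGENTOPS_API_KEY from the environment.
agentops.init(api_key=os.environ["AGENTOPS_API_KEY"])
```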
Full example:

```python
import ollama
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)

ollama.pull("<MODEL NAME>")

response = ollama.chat(
    model="<MODEL NAME>",
    # ollama.chat has no max_tokens parameter; response length is
    # capped via the num_predict option instead.
    options={"num_predict": 1024},
    messages=[{
        "role": "user",
        "content": "Write a haiku about AI and humans working together"
    }]
)
print(response['message']['content'])
agentops.end_session('Success')
```
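Ollama also supports streaming responses. A short sketch, reusing the same model-name placeholder: with `stream=True`, `ollama.chat` returns an iterator of partial chunks instead of a single response.

```python
import ollama

# Stream tokens as they are generated instead of waiting for the full reply.
stream = ollama.chat(
    model="<MODEL NAME>",
    messages=[{"role": "user", "content": "Explain LLM observability in one paragraph"}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a partial message; print it as it arrives.
    print(chunk["message"]["content"], end="", flush=True)
```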