# AgentOps Ollama Integration
This example demonstrates how to use AgentOps to monitor your Ollama LLM calls. First, let's install the required packages.

⚠️ Important: Make sure you have Ollama installed and running locally before running this notebook. You can install it from ollama.ai.
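The exact package set depends on your setup; a typical install for this notebook (assuming the `agentops`, `ollama`, and `python-dotenv` packages from PyPI) looks like:

```shell
# Install the AgentOps SDK, the Ollama Python client, and .env support
pip install -U agentops ollama python-dotenv
```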
There are two ways to provide your AgentOps API key:

- Create an environment variable in a `.env` file or by another method. By default, the AgentOps `init()` function looks for an environment variable named `AGENTOPS_API_KEY`. Or…
- Replace `<your_agentops_key>` below and pass it via the optional `api_key` parameter of the AgentOps `init(api_key=...)` function.

Remember not to commit your API key to a public repo!
💡 Note: In production environments, you should add proper error handling around your Ollama calls and call `agentops.end_session("Error")` when an exception occurs.
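A minimal sketch of that pattern, assuming the `ollama` Python client with a running local server (the model name `llama3` is an assumption; substitute any model you have pulled):

```python
import agentops
import ollama

agentops.init()  # reads AGENTOPS_API_KEY from the environment

try:
    # An Ollama call can fail if the server is not running or the model
    # is missing, so wrap it and end the session with an error state.
    response = ollama.chat(
        model="llama3",  # assumed model name
        messages=[{"role": "user", "content": "Hello, Ollama!"}],
    )
    print(response["message"]["content"])
except Exception:
    agentops.end_session("Error")
    raise
```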
Finally, let's end our AgentOps session.
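Ending the session records it in the AgentOps dashboard; passing `"Success"` marks a normal completion (this assumes a session was started earlier with `agentops.init()`):

```python
import agentops

# Close out the session started by agentops.init() above
agentops.end_session("Success")
```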