AgentOps Ollama Integration
This example demonstrates how to use AgentOps to monitor your Ollama LLM calls.
First, let’s install the required packages:
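A minimal install cell might look like this; `python-dotenv` is an extra assumption here, used later to load the API key from a `.env` file:

```python
%pip install -U agentops
%pip install -U ollama
%pip install -U python-dotenv
```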
⚠️ Important: Make sure you have Ollama installed and running locally before running this notebook. You can install it from ollama.ai.
Then import them:
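```python
import os

import agentops
import ollama
from dotenv import load_dotenv
```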
Next, we’ll set our API keys. For Ollama, we just need to make sure Ollama is running locally. Get an AgentOps API key, then set it one of two ways:

- Create an environment variable in a `.env` file or other method. By default, the AgentOps `init()` function will look for an environment variable named `AGENTOPS_API_KEY`. Or…
- Replace `<your_agentops_key>` below and pass in the optional `api_key` parameter to the AgentOps `init(api_key=...)` function. Remember not to commit your API key to a public repo!
Now let’s make some basic calls to Ollama. Make sure you have pulled the model first; use the following, or swap in whichever model you want to use.
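For example, assuming `llama3` as the model (replace it with any model you’ve pulled):

```python
# Pull the model once so it's available locally
ollama.pull("llama3")

# A basic, non-streaming chat call; AgentOps tracks it automatically after init()
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```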
Let’s try streaming responses as well:
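Under the same assumptions, passing `stream=True` yields incremental chunks that we can print as they arrive:

```python
# Stream the response chunk by chunk
stream = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Write a short poem about observability."}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()
```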
💡 Note: In production environments, you should add proper error handling around the Ollama calls and use `agentops.end_session("Error")` when exceptions occur.
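One way to structure that (a sketch, not the only pattern):

```python
try:
    response = ollama.chat(
        model="llama3",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response["message"]["content"])
except Exception:
    # Close the session with an error state so the failure shows up in AgentOps
    agentops.end_session("Error")
    raise
```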
Finally, let’s end our AgentOps session:
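```python
# Mark the session as successful in the AgentOps dashboard
agentops.end_session("Success")
```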