AgentOps for observing LiteLLM
We can use AgentOps to observe LiteLLM, a lightweight library for calling many large language models through a single interface. This integration lets you monitor and log the behavior and performance of your LiteLLM applications. Because many agent libraries are built on LiteLLM, the integration also extends observability to agents built with those libraries.
First, let's install the required packages.
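A minimal install cell might look like this (package names assume the PyPI distributions `agentops` and `litellm`, plus `python-dotenv` for the `.env`-based key setup described below):

```shell
pip install -U agentops litellm python-dotenv
```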
Then import them.
Next, we'll set our API keys. There are several ways to do this; the code below is simply the most foolproof for the purposes of this notebook, since it works both for users who rely on environment variables and for those who just want to set the API key directly in the notebook.
1. Create an environment variable in a `.env` file or by another method. By default, the AgentOps `init()` function will look for an environment variable named `AGENTOPS_API_KEY`. Or...
2. Replace `<your_agentops_key>` below and pass in the optional `api_key` parameter to the AgentOps `init(api_key=...)` function. Remember not to commit your API key to a public repo!
LiteLLM allows you to use several models, including ones from OpenAI, Llama, Mistral, Claude, Gemini, Gemma, DALL-E, Whisper, and more, all using the OpenAI format. To use a different model, all you need to change are the API key and the model name (`litellm.completion(model="...")`).
Note: AgentOps requires that you call LiteLLM completions differently than LiteLLM's docs show. Instead of doing this:
You should do this: