LiteLLM Example
Using LiteLLM with AgentOps for observing LLM calls
AgentOps for observing LiteLLM
We can use AgentOps to observe LiteLLM, a lightweight library for working with large language models. This integration lets you monitor and log the performance of your LiteLLM applications, providing insight into their behavior and efficiency. Because many agent libraries rely on LiteLLM, the integration also extends observability to agents built with those libraries.
First, let’s install the required packages.
Installation
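A minimal install cell might look like the following (`python-dotenv` is an assumption here, used only for loading keys from a `.env` file in the setup step below):

```python
%pip install -U litellm
%pip install -U agentops
%pip install -U python-dotenv  # optional: only needed if you load keys from a .env file
```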
Setup
Then import them.
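```python
# litellm and agentops are the core packages; os and dotenv are assumed
# helpers for the key-loading step below.
import os

from dotenv import load_dotenv

import agentops
import litellm
```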
Next, we’ll set our API keys. There are several ways to do this; the code below is simply the most foolproof for the purposes of this notebook, as it accounts both for users who use environment variables and for those who just want to set the key here. You can either:
- Create an environment variable in a `.env` file or via another method. By default, the AgentOps `init()` function will look for an environment variable named `AGENTOPS_API_KEY`. Or…
- Replace `<your_agentops_key>` below and pass in the optional `api_key` parameter to the AgentOps `init(api_key=...)` function. Remember not to commit your API key to a public repo!
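A key-loading cell along these lines handles both cases (`OPENAI_API_KEY` is shown as an example; substitute whichever provider keys you need):

```python
load_dotenv()

# Fall back to the placeholder if the environment variable isn't set.
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY", "<your_agentops_key>")
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY", "<your_openai_key>")
```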
LiteLLM allows you to use many models, including those from OpenAI, Llama, Mistral, Claude, Gemini, Gemma, DALL·E, Whisper, and more, all through the OpenAI format. To use a different model, all you need to change is the API key for that provider and the model name in `litellm.completion(model="...")`.
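For example, switching providers is just a matter of changing the model string. The model names below are illustrative and assume the matching provider keys (e.g. `ANTHROPIC_API_KEY`) are set:

```python
messages = [{"role": "user", "content": "Write a haiku about observability."}]  # illustrative prompt

# Same call shape for every provider; only the model string changes.
openai_response = litellm.completion(model="gpt-4o", messages=messages)
claude_response = litellm.completion(model="claude-3-5-sonnet-20240620", messages=messages)
```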
Initialize AgentOps:
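```python
# The api_key argument is optional; init() also picks up AGENTOPS_API_KEY
# from the environment automatically.
agentops.init(api_key=os.getenv("AGENTOPS_API_KEY"))
```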
Making an LLM Call
Note: AgentOps requires that you call LiteLLM completions differently than LiteLLM’s docs show. Instead of doing this:
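```python
# The import style from LiteLLM's docs -- this call is NOT observed by AgentOps.
from litellm import completion

response = completion(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Hello!"}],
)
```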
You should do this:
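```python
import litellm

# Calling through the module lets AgentOps observe the request.
response = litellm.completion(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```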
End the AgentOps trace:
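A minimal sketch, assuming the trace API in recent AgentOps releases (older versions use `agentops.end_session("Success")` instead):

```python
# Mark the trace as finished so it shows up as complete in the AgentOps dashboard.
agentops.end_trace(end_state="Success")
```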