Track and analyze your LiteLLM calls across multiple providers with AgentOps
AgentOps provides seamless integration with LiteLLM, allowing you to automatically track all your LLM API calls across different providers through a unified interface.
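Before running the examples below, install the packages they import. The PyPI distribution names used here (litellm, agentops, python-dotenv) are the standard ones:

```shell
pip install litellm agentops python-dotenv
```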
The simplest way to integrate AgentOps with LiteLLM is to register it as a `success_callback`:
```python
import os
from dotenv import load_dotenv
import litellm
from litellm import completion

# Load environment variables
load_dotenv()

# Set API keys for different providers
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
os.environ["ANTHROPIC_API_KEY"] = os.getenv("ANTHROPIC_API_KEY")
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY")

# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]

# Make completion requests with LiteLLM
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)

print(response.choices[0].message.content)

# All LiteLLM API calls are automatically tracked by AgentOps
```
AgentOps also tracks streaming requests with LiteLLM:
```python
import os
from dotenv import load_dotenv
import litellm
from litellm import completion

# Load environment variables and set API keys
load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY")

# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]

# Make a streaming completion request
response = completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a short poem about AI."}],
    stream=True
)

# Process the streaming response
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

print()  # Add a newline at the end
```
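If you need the full text after streaming (for example, to log or post-process it), you can accumulate the deltas as they arrive. This is a sketch, not part of the LiteLLM API; the mocked chunk objects below stand in for the stream so the example runs offline, mirroring the chunk structure used in the loop above:

```python
from types import SimpleNamespace

def collect_stream(chunks):
    """Concatenate the delta content of each streamed chunk into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # skip chunks with empty/None deltas
            parts.append(delta)
    return "".join(parts)

def make_chunk(text):
    """Build a mock chunk shaped like chunk.choices[0].delta.content."""
    delta = SimpleNamespace(content=text)
    return SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

fake_stream = [make_chunk("Hello"), make_chunk(", "), make_chunk("world"), make_chunk(None)]
print(collect_stream(fake_stream))  # -> Hello, world
```

In real code you would pass the iterator returned by `completion(..., stream=True)` directly to `collect_stream`.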
One of LiteLLM’s key features is the ability to switch between providers easily:
```python
import os
from dotenv import load_dotenv
import litellm
from litellm import completion

# Load environment variables and set API keys
load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
os.environ["ANTHROPIC_API_KEY"] = os.getenv("ANTHROPIC_API_KEY")
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY")

# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]

# OpenAI request
openai_response = completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "What are the advantages of GPT-4?"}]
)
print("OpenAI Response:", openai_response.choices[0].message.content)

# Anthropic request using the same interface
anthropic_response = completion(
    model="anthropic/claude-3-opus-20240229",
    messages=[{"role": "user", "content": "What are the advantages of Claude?"}]
)
print("Anthropic Response:", anthropic_response.choices[0].message.content)

# All requests across different providers are automatically tracked by AgentOps
```
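Because every provider is called through the same `completion` interface, you can layer simple fallback logic on top of it. The helper below is illustrative and not part of LiteLLM; `fake_completion` stands in for the real `litellm.completion` so the sketch runs offline:

```python
def completion_with_fallback(models, messages, completion_fn):
    """Try each model in order, returning the first response that succeeds."""
    last_err = None
    for model in models:
        try:
            return completion_fn(model=model, messages=messages)
        except Exception as err:  # provider call failed; try the next model
            last_err = err
    raise last_err

# Stand-in for litellm.completion so this example runs without API keys
def fake_completion(model, messages):
    if model == "gpt-4":
        raise RuntimeError("rate limited")
    return f"response from {model}"

result = completion_with_fallback(
    ["gpt-4", "anthropic/claude-3-opus-20240229"],
    [{"role": "user", "content": "hi"}],
    fake_completion,
)
print(result)  # -> response from anthropic/claude-3-opus-20240229
```

Calls that succeed through the fallback still go through `completion`, so AgentOps tracking is unaffected.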
Set your API keys as environment variables in a .env file for easy access.
```shell
AGENTOPS_API_KEY=<YOUR API KEY>
OPENAI_API_KEY=<YOUR OPENAI API KEY>
ANTHROPIC_API_KEY=<YOUR ANTHROPIC API KEY>
# Add any other provider API keys you need
```
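A quick way to catch a missing key before making any calls is to check the environment up front. The `missing_keys` helper below is illustrative, not part of LiteLLM or AgentOps; the variable names match the .env file above:

```python
import os

# Expected variable names from the .env file above
REQUIRED_KEYS = ["AGENTOPS_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"]

def missing_keys(required=REQUIRED_KEYS):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.getenv(name)]

missing = missing_keys()
if missing:
    print("Missing API keys:", ", ".join(missing))
else:
    print("All API keys are set.")
```

Run this after `load_dotenv()` so values from the .env file are visible to `os.getenv`.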