AgentOps provides seamless integration with LiteLLM, allowing you to automatically track all your LLM API calls across different providers through a unified interface.

Installation

pip install agentops litellm

Usage

The simplest way to integrate AgentOps with LiteLLM is to register it in LiteLLM's success_callback list:

from dotenv import load_dotenv
import litellm
from litellm import completion

# Load API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, AGENTOPS_API_KEY)
# from a .env file into the environment
load_dotenv()

# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]

# Make completion requests with LiteLLM
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)

print(response.choices[0].message.content)

# All LiteLLM API calls are automatically tracked by AgentOps
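
If you prefer explicit control over when tracking starts (for example, to group several calls into one session), you can initialize the AgentOps SDK yourself before making requests. A minimal sketch, assuming the standard agentops.init() entry point, which reads AGENTOPS_API_KEY from the environment when no key is passed:

import agentops
import litellm
from litellm import completion

# Start AgentOps explicitly; the API key is read from the
# AGENTOPS_API_KEY environment variable if not passed here
agentops.init()

# Register the callback as before
litellm.success_callback = ["agentops"]

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)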

Streaming Example

AgentOps also tracks streaming requests with LiteLLM:

from dotenv import load_dotenv
import litellm
from litellm import completion

# Load API keys (OPENAI_API_KEY, AGENTOPS_API_KEY) from .env
load_dotenv()

# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]

# Make a streaming completion request
response = completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a short poem about AI."}],
    stream=True
)

# Process the streaming response
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()  # Add a newline at the end
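
LiteLLM also exposes an async variant, acompletion(), and the same callback configuration applies. A sketch of the async streaming equivalent of the example above:

import asyncio
import litellm
from litellm import acompletion

litellm.success_callback = ["agentops"]

async def main():
    # acompletion() is the async counterpart of completion()
    response = await acompletion(
        model="gpt-4",
        messages=[{"role": "user", "content": "Write a short poem about AI."}],
        stream=True
    )
    # Async streams are consumed with "async for"
    async for chunk in response:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()

asyncio.run(main())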

Multi-Provider Example

One of LiteLLM’s key features is the ability to switch between providers easily:

from dotenv import load_dotenv
import litellm
from litellm import completion

# Load API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, AGENTOPS_API_KEY) from .env
load_dotenv()

# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]

# OpenAI request
openai_response = completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "What are the advantages of GPT-4?"}]
)

print("OpenAI Response:", openai_response.choices[0].message.content)

# Anthropic request using the same interface
anthropic_response = completion(
    model="anthropic/claude-3-opus-20240229",
    messages=[{"role": "user", "content": "What are the advantages of Claude?"}]
)

print("Anthropic Response:", anthropic_response.choices[0].message.content)

# All requests across different providers are automatically tracked by AgentOps
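
Because every provider is reached through the same completion() call, you can also loop over a list of model names, for example to compare answers side by side; each call is recorded as its own event:

import litellm
from litellm import completion

litellm.success_callback = ["agentops"]

models = ["gpt-4", "anthropic/claude-3-opus-20240229"]
messages = [{"role": "user", "content": "What is one benefit of a unified LLM API?"}]

for model in models:
    response = completion(model=model, messages=messages)
    print(f"--- {model} ---")
    print(response.choices[0].message.content)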

Environment Variables

Store your API keys in a .env file so they can be loaded automatically.

AGENTOPS_API_KEY=<YOUR API KEY>
OPENAI_API_KEY=<YOUR OPENAI API KEY>
ANTHROPIC_API_KEY=<YOUR ANTHROPIC API KEY>
# Add any other provider API keys you need
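
Optionally, you can fail fast at startup if a required key is missing, rather than hitting an authentication error mid-run. A small guard using only the standard library:

import os

# Verify that all required keys are present in the environment
required = ("AGENTOPS_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY")
missing = [key for key in required if not os.getenv(key)]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")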

Read more about environment variables in Advanced Configuration.

Additional Resources

For more information on integrating AgentOps with LiteLLM, refer to the LiteLLM documentation on AgentOps integration.