AgentOps provides seamless integration with LiteLLM, allowing you to automatically track all your LLM API calls across different providers through a unified interface.

Installation

pip install agentops litellm

Setting Up API Keys

Before using LiteLLM with AgentOps, you need to set up two kinds of API keys:

  • Provider API Keys: From your chosen LLM provider (OpenAI, Anthropic, Google, etc.)
  • AGENTOPS_API_KEY: From your AgentOps Dashboard

To set them up, either export them as environment variables or add them to a .env file.

export OPENAI_API_KEY="your_openai_api_key_here"
export ANTHROPIC_API_KEY="your_anthropic_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
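
Or, equivalently, place the same keys in a .env file at your project root (the values below are placeholders):

OPENAI_API_KEY="your_openai_api_key_here"
ANTHROPIC_API_KEY="your_anthropic_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"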

Then load the environment variables in your Python code:

from dotenv import load_dotenv
import os

# Load environment variables from .env file
load_dotenv()

# Read each key from the environment, falling back to a placeholder if unset
# (assigning None to os.environ would raise a TypeError)
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY", "your_openai_api_key_here")
os.environ["ANTHROPIC_API_KEY"] = os.getenv("ANTHROPIC_API_KEY", "your_anthropic_api_key_here")
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY", "your_agentops_api_key_here")

Usage

The simplest way to integrate AgentOps with LiteLLM is to register "agentops" in LiteLLM's success_callback list:

import litellm
from litellm import completion

# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]

# Make completion requests with LiteLLM
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)

print(response.choices[0].message.content)
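
Because LiteLLM routes requests by model string, the same tracked call can target a different provider with no other code changes. For example, assuming ANTHROPIC_API_KEY is set and the model below is available on your account:

# Switching providers only requires changing the model string;
# AgentOps tracks this call through the same callback
response = completion(
    model="anthropic/claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)

print(response.choices[0].message.content)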

Examples

AgentOps also captures streaming completions. Configure the callback as before and pass stream=True:

import litellm
from litellm import completion

# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]

# Make a streaming completion request
response = completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a short poem about AI."}],
    stream=True
)

# Process the streaming response
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()  # Add a newline at the end
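
LiteLLM also provides an async entry point, acompletion. The same callback configuration applies, so these calls should be tracked as well; here is a minimal sketch:

import asyncio

import litellm
from litellm import acompletion

# Configure LiteLLM to use AgentOps, as in the synchronous examples
litellm.success_callback = ["agentops"]

async def main():
    # Await the completion instead of blocking the event loop
    response = await acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello, how are you?"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())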

More Examples

For more details on this integration, refer to the LiteLLM documentation on AgentOps integration.
