Track and analyze your LiteLLM calls across multiple providers with AgentOps
AgentOps provides seamless integration with LiteLLM, allowing you to automatically track all your LLM API calls across different providers through a unified interface.
Then load the environment variables in your Python code:
```python
from dotenv import load_dotenv
import os

# Load environment variables from the .env file
load_dotenv()

# Set up environment variables with empty-string fallbacks for any missing keys
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY", "")
os.environ["ANTHROPIC_API_KEY"] = os.getenv("ANTHROPIC_API_KEY", "")
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY", "")
```
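For reference, the corresponding .env file in your project root might look like the following; all values shown are placeholders you should replace with your own keys:

```
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
AGENTOPS_API_KEY=your-agentops-api-key
```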
The simplest way to integrate AgentOps with LiteLLM is to register "agentops" in LiteLLM's success_callback list:
```python
import litellm
from litellm import completion

# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]

# Make a completion request with LiteLLM
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)

print(response.choices[0].message.content)
```
Streaming completions are tracked the same way:

```python
import litellm
from litellm import completion

# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]

# Make a streaming completion request
response = completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a short poem about AI."}],
    stream=True
)

# Process the streaming response chunk by chunk
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()  # Add a newline at the end
```
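Because LiteLLM routes every provider through the same completion interface, the one callback also captures calls to other providers. Below is a minimal sketch that pairs an OpenAI call with an Anthropic call, assuming your ANTHROPIC_API_KEY is set as above; the prompts and model names are illustrative:

```python
import litellm
from litellm import completion

# The same AgentOps callback covers every provider LiteLLM routes to
litellm.success_callback = ["agentops"]

# OpenAI call, tracked by AgentOps
openai_response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize what LiteLLM does in one sentence."}]
)

# Anthropic call through the same interface, also tracked by AgentOps
anthropic_response = completion(
    model="anthropic/claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "Summarize what AgentOps does in one sentence."}]
)

print(openai_response.choices[0].message.content)
print(anthropic_response.choices[0].message.content)
```

Both calls then appear in the same AgentOps session, which is what makes the unified interface useful for cross-provider comparison.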