LiteLLM
Call the latest models using the OpenAI format including: Llama, Mistral, Claude, Gemini, Gemma, Dall-E, Whisper
From LiteLLM’s docs:
Call 100+ LLMs using the same Input/Output Format
- Translate inputs to the provider's completion, embedding, and image_generation endpoints
- Consistent output: text responses are always available at ['choices'][0]['message']['content']
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI)
- Track spend & set budgets per project
LiteLLM also supports many additional providers.
Using AgentOps with LiteLLM
Requires litellm>=1.3.1
AgentOps requires one minor adjustment to how you call LiteLLM: invoke the functions through the litellm module rather than importing them directly.
```python
# Do not import the completion function directly:
# from litellm import completion
# ...
# response = completion(model="claude-3", messages=messages)

# Call it through the litellm module instead:
import litellm
...
response = litellm.completion(model="claude-3", messages=messages)
# or the async variant
response = await litellm.acompletion(model="claude-3", messages=messages)
```
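Why the module-qualified form matters: instrumentation layers like AgentOps typically work by patching the `completion` attribute on the `litellm` module at runtime, so a reference grabbed via `from litellm import completion` points at the original, unpatched function and bypasses tracking. A minimal sketch of that pattern, using a hypothetical stand-in module (`fake_litellm`, `instrumented`, and `calls` are illustrative names, not part of either library):

```python
import types

# A stand-in module playing the role of litellm.
fake_litellm = types.ModuleType("fake_litellm")

def completion(model, messages):
    return {"choices": [{"message": {"content": "hi"}}]}

fake_litellm.completion = completion

# Grab a direct reference, as `from litellm import completion` would.
direct_ref = fake_litellm.completion

# An instrumentation layer patches the module attribute.
calls = []
original = fake_litellm.completion

def instrumented(model, messages):
    calls.append(model)  # record the call, then delegate
    return original(model, messages)

fake_litellm.completion = instrumented

# The module-qualified call goes through the patched function...
fake_litellm.completion(model="claude-3", messages=[])
# ...but the direct reference bypasses it entirely.
direct_ref(model="claude-3", messages=[])

print(calls)  # → ['claude-3'] — only the module-qualified call was tracked
```

This is why the docs above ask for `litellm.completion(...)` rather than a bare `completion(...)`.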