LlamaIndex is a framework for building context-augmented generative AI applications with LLMs. AgentOps provides observability into your LlamaIndex applications through automatic instrumentation.

1. Install the AgentOps SDK

pip install agentops
2. Install the LlamaIndex AgentOps instrumentation

pip install llama-index-instrumentation-agentops
3. Add two lines of code

Make sure to set the global handler before invoking any models (OpenAI, Cohere, CrewAI, etc.) so that those calls are captured.

from llama_index.core import set_global_handler

# NOTE: Feel free to set your AgentOps environment variables (e.g., 'AGENTOPS_API_KEY')
# as outlined in the AgentOps documentation, or pass the equivalent keyword arguments
# anticipated by AgentOps' AOClient as **eval_params in set_global_handler.

set_global_handler("agentops")

Set your API key as an environment variable (for example, in a .env file) for easy access.

AGENTOPS_API_KEY=<YOUR API KEY>

Read more about environment variables in Advanced Configuration
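A key set in a .env file still has to be present in the process environment before AgentOps initializes (e.g., loaded via python-dotenv). A minimal stdlib-only sketch of checking for the key up front; the helper name below is hypothetical, for illustration only:

```python
import os

def get_agentops_key() -> str:
    """Return the AgentOps API key from the environment, failing loudly if unset."""
    key = os.environ.get("AGENTOPS_API_KEY")
    if not key:
        raise RuntimeError(
            "AGENTOPS_API_KEY is not set; export it or load it from your .env file"
        )
    return key
```

Failing fast like this surfaces a missing key at startup rather than as a silent lack of telemetry later.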

4. Run your LlamaIndex application

Execute your program and visit app.agentops.ai/drilldown to observe your LlamaIndex application! 🕵️

After your run, AgentOps prints a clickable URL to the console that links directly to your session in the Dashboard.


Usage Pattern

Here’s a simple example of how to use AgentOps with LlamaIndex:

from llama_index.core import set_global_handler
import llama_index.core

# Set the global handler to AgentOps
set_global_handler("agentops")

# Your LlamaIndex application code here
# AgentOps will automatically track LLM calls and other operations

Additional Resources

For more detailed information about LlamaIndex’s observability features and AgentOps integration, check out the LlamaIndex documentation.