Core Concepts
Understanding the fundamental concepts of AgentOps
The AgentOps SDK Architecture
AgentOps is designed to provide comprehensive monitoring and analytics for AI agent workflows with minimal implementation effort. The SDK follows these key design principles:
Automated Instrumentation
After calling agentops.init(), the SDK automatically identifies installed LLM providers and instruments their API calls. This allows AgentOps to capture the interactions between your code and the LLM providers and collect data for your dashboard without requiring manual instrumentation for every call.
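For example, a minimal sketch assuming the OpenAI Python client is installed (the keys shown are placeholders):

```python
import agentops
from openai import OpenAI

# Initialize AgentOps once; installed providers (e.g. the OpenAI client)
# are instrumented automatically from this point on.
agentops.init(api_key="<AGENTOPS_API_KEY>")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This call is captured by AgentOps without any extra code.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```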
Declarative Tracing with Decorators
The decorators system allows you to add tracing to your existing functions and classes with minimal code changes. Decorators create hierarchical spans that provide a structured view of your agent’s operations for monitoring and analysis.
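A minimal sketch, assuming an operation decorator importable from agentops.sdk.decorators (the decorator name and import path may differ between SDK versions):

```python
import agentops
from agentops.sdk.decorators import operation  # import path assumed

agentops.init(api_key="<AGENTOPS_API_KEY>")

@operation
def summarize(text: str) -> str:
    # The decorator records this call as a span nested under the current session.
    return text[:100]

summarize("A long document ...")
```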
OpenTelemetry Foundation
AgentOps is built on OpenTelemetry, a widely-adopted standard for observability instrumentation. This provides a robust and standardized approach to collecting, processing, and exporting telemetry data.
Sessions
A Session represents a single user interaction with your agent. When you initialize AgentOps using the init() function, a session is automatically created for you:
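For example, a single init() call is enough to open a session (the API key shown is a placeholder):

```python
import agentops

# init() starts a session automatically; subsequent spans and LLM calls attach to it.
agentops.init(api_key="<AGENTOPS_API_KEY>")
```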
By default, all events and API calls will be associated with this session. For more advanced use cases, you can control session creation manually:
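A sketch of manual session control, assuming the auto_start_session flag and the start_session / end_session helpers (names may vary between SDK versions):

```python
import agentops

# Disable automatic session creation, then manage the session explicitly.
agentops.init(api_key="<AGENTOPS_API_KEY>", auto_start_session=False)

agentops.start_session(tags=["manual-run"])
try:
    ...  # your agent workflow
    agentops.end_session("Success")
except Exception:
    agentops.end_session("Fail")
    raise
```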
Span Hierarchy
In AgentOps, activities are organized into a hierarchical structure of spans:
- SESSION: The root container for all activities in a single execution of your workflow
- AGENT: Represents an autonomous entity with specialized capabilities
- WORKFLOW: A logical grouping of related operations
- OPERATION/TASK: A specific task or function performed by an agent
- LLM: An interaction with a language model
- TOOL: The use of a tool or API by an agent
This hierarchy creates a complete trace of your agent’s execution:
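An illustrative trace for a simple research workflow might look like this:

```
SESSION: research_workflow
└── AGENT: researcher
    ├── OPERATION: gather_sources
    │   ├── TOOL: web_search
    │   └── LLM: gpt-4 (summarize results)
    └── OPERATION: write_report
        └── LLM: gpt-4 (draft report)
```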
Agents
An Agent represents a component in your application that performs tasks. You can create and track agents using the @agent decorator:
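A minimal sketch, assuming the agent and operation decorators are importable from agentops.sdk.decorators (the class and method names are placeholders):

```python
from agentops.sdk.decorators import agent, operation  # import path assumed

@agent
class ResearchAgent:
    @operation
    def search(self, query: str) -> list[str]:
        # Spans created here are nested under this agent's span.
        return [f"result for {query}"]

researcher = ResearchAgent()
researcher.search("state of LLM observability")
```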
LLM Events
AgentOps automatically tracks LLM API calls from supported providers, collecting valuable information like:
- Model: The specific model used (e.g., “gpt-4”, “claude-3-opus”)
- Provider: The LLM provider (e.g., “OpenAI”, “Anthropic”)
- Prompt Tokens: Number of tokens in the input
- Completion Tokens: Number of tokens in the output
- Cost: The estimated cost of the interaction
- Messages: The prompt and completion content
Tags
Tags help you organize and filter your sessions. You can add tags when initializing AgentOps or when starting a session:
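A sketch of both approaches; the tags parameter name is an assumption and may appear as default_tags in some SDK versions:

```python
import agentops

# Tag every session started by this process.
agentops.init(api_key="<AGENTOPS_API_KEY>", tags=["production", "checkout-flow"])

# Or tag an individual, manually started session.
agentops.start_session(tags=["experiment-42"])
```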
Host Environment
AgentOps automatically collects basic information about the environment where your agent is running:
- Operating System: The OS type and version
- Python Version: The version of Python being used
- Hostname: The name of the host machine (anonymized)
- SDK Version: The version of the AgentOps SDK being used
Dashboard Views
The AgentOps dashboard provides several ways to visualize and analyze your agent’s performance:
- Session List: Overview of all sessions with filtering options
- Timeline View: Chronological display of spans showing duration and relationships
- Tree View: Hierarchical representation of spans showing parent-child relationships
- Message View: Detailed view of LLM interactions with prompt and completion content
- Analytics: Aggregated metrics across sessions and operations
Putting It All Together
A typical implementation looks like this:
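An end-to-end sketch combining the pieces above; the decorator import path, the tags parameter, and the explicit end_session call are assumptions that may differ by SDK version:

```python
import agentops
from agentops.sdk.decorators import agent, operation  # import path assumed
from openai import OpenAI

agentops.init(api_key="<AGENTOPS_API_KEY>", tags=["demo"])
client = OpenAI()

@agent
class SupportAgent:
    @operation
    def answer(self, question: str) -> str:
        # The LLM call below is auto-instrumented and nested under this operation.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

if __name__ == "__main__":
    bot = SupportAgent()
    print(bot.answer("How do I reset my password?"))
    agentops.end_session("Success")  # explicit close assumed; may happen automatically
```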