# Core Concepts
Source: https://docs.agentops.ai/v2/concepts/core-concepts
Understanding the fundamental concepts of AgentOps
# The AgentOps SDK Architecture
AgentOps is designed to provide comprehensive monitoring and analytics for AI agent workflows with minimal implementation effort. The SDK follows these key design principles:
## Automated Instrumentation
After calling `agentops.init()`, the SDK automatically identifies installed LLM providers and instruments their API calls. This allows AgentOps to capture interactions between your code and the LLM providers to collect data for your dashboard without requiring manual instrumentation for every call.
## Declarative Tracing with Decorators
The [decorators](/v2/concepts/decorators) system allows you to add tracing to your existing functions and classes with minimal code changes. Decorators create hierarchical spans that provide a structured view of your agent's operations for monitoring and analysis.
## OpenTelemetry Foundation
AgentOps is built on [OpenTelemetry](https://opentelemetry.io/), a widely-adopted standard for observability instrumentation. This provides a robust and standardized approach to collecting, processing, and exporting telemetry data.
# Sessions
A [Session](/v2/concepts/sessions) represents a single user interaction with your agent. When you initialize AgentOps using the `init` function, a session is automatically created for you:
```python theme={null}
import agentops
# Initialize AgentOps with automatic session creation
agentops.init(api_key="YOUR_API_KEY")
```
By default, all events and API calls will be associated with this session. For more advanced use cases, you can control session creation manually:
```python theme={null}
# Initialize without auto-starting a session
agentops.init(api_key="YOUR_API_KEY", auto_start_session=False)
# Later, manually start a session when needed
agentops.start_session(tags=["customer-query"])
```
# Span Hierarchy
In AgentOps, activities are organized into a hierarchical structure of spans:
* **SESSION**: The root container for all activities in a single execution of your workflow
* **AGENT**: Represents an autonomous entity with specialized capabilities
* **WORKFLOW**: A logical grouping of related operations
* **OPERATION/TASK**: A specific task or function performed by an agent
* **LLM**: An interaction with a language model
* **TOOL**: The use of a tool or API by an agent
This hierarchy creates a complete trace of your agent's execution:
```
SESSION
├── AGENT
│   ├── OPERATION/TASK
│   │   ├── LLM
│   │   └── TOOL
│   └── WORKFLOW
│       └── OPERATION/TASK
└── LLM (unattributed to a specific agent)
```
# Agents
An **Agent** represents a component in your application that performs tasks. You can create and track agents using the `@agent` decorator:
```python theme={null}
from agentops.sdk.decorators import agent, operation

@agent(name="customer_service")
class CustomerServiceAgent:
    @operation
    def answer_query(self, query):
        # Agent logic here
        pass
```
# LLM Events
AgentOps automatically tracks LLM API calls from supported providers, collecting valuable information like:
* **Model**: The specific model used (e.g., "gpt-4", "claude-3-opus")
* **Provider**: The LLM provider (e.g., "OpenAI", "Anthropic")
* **Prompt Tokens**: Number of tokens in the input
* **Completion Tokens**: Number of tokens in the output
* **Cost**: The estimated cost of the interaction
* **Messages**: The prompt and completion content
```python theme={null}
import agentops
from openai import OpenAI

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY")

# Initialize the OpenAI client
client = OpenAI()

# This LLM call is automatically tracked
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What's the capital of France?"}]
)
```
# Tags
[Tags](/v2/concepts/tags) help you organize and filter your sessions. You can add tags when initializing AgentOps or when starting a session:
```python theme={null}
# Add tags when initializing
agentops.init(api_key="YOUR_API_KEY", tags=["production", "web-app"])
# Or when manually starting a session
agentops.start_session(tags=["customer-service", "tier-1"])
```
# Host Environment
AgentOps automatically collects basic [information](/v2/concepts/host-env) about the environment where your agent is running:
* **Operating System**: The OS type and version
* **Python Version**: The version of Python being used
* **Hostname**: The name of the host machine (anonymized)
* **SDK Version**: The version of the AgentOps SDK being used
# Dashboard Views
The AgentOps dashboard provides several ways to visualize and analyze your agent's performance:
* **Session List**: Overview of all sessions with filtering options
* **Timeline View**: Chronological display of spans showing duration and relationships
* **Tree View**: Hierarchical representation of spans showing parent-child relationships
* **Message View**: Detailed view of LLM interactions with prompt and completion content
* **Analytics**: Aggregated metrics across sessions and operations
# Putting It All Together
A typical implementation looks like this:
```python theme={null}
import agentops
from openai import OpenAI
from agentops.sdk.decorators import agent, operation

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY", tags=["production"])

# Define an agent
@agent(name="assistant")
class AssistantAgent:
    def __init__(self):
        self.client = OpenAI()

    @operation
    def answer_question(self, question):
        # This LLM call will be automatically tracked and associated with this agent
        response = self.client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": question}]
        )
        return response.choices[0].message.content

def workflow():
    # Use the agent
    assistant = AssistantAgent()
    answer = assistant.answer_question("What's the capital of France?")
    print(answer)

workflow()

# Session is automatically tracked until the application terminates
```
# Decorators
Source: https://docs.agentops.ai/v2/concepts/decorators
Use decorators to track activities in your agent system
## Available Decorators
AgentOps provides the following decorators:
| Decorator | Purpose | Creates |
| ------------ | --------------------------------------------------- | -------------- |
| `@session` | Track an entire user interaction | SESSION span |
| `@agent` | Track agent classes and their lifecycle | AGENT span |
| `@operation` | Track discrete operations performed by agents | OPERATION span |
| `@workflow` | Track a sequence of operations | WORKFLOW span |
| `@task` | Track smaller units of work (similar to operations) | TASK span |
| `@tool` | Track tool usage and cost in agent operations | TOOL span |
| `@guardrail` | Track guardrail input and output | GUARDRAIL span |
## Decorator Hierarchy
The decorators create spans that form a hierarchy:
```
SESSION
├── AGENT
│   ├── OPERATION or TASK
│   │   ├── LLM
│   │   └── TOOL
│   └── WORKFLOW
│       └── OPERATION or TASK
└── AGENT
    └── OPERATION or TASK
```
## Using Decorators
### @session
The `@session` decorator tracks an entire user interaction from start to finish:
```python theme={null}
from agentops.sdk.decorators import session
import agentops

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY")

@session
def answer_question(question):
    # Create and use agents
    weather_agent = WeatherAgent()
    result = weather_agent.get_forecast(question)
    # Return the final result
    return result
```
Each `@session` function call creates a new session span that contains all the agents, operations, and workflows used during that interaction.
### @agent
The `@agent` decorator instruments a class to track its lifecycle and operations:
```python theme={null}
from agentops.sdk.decorators import agent, operation
import agentops

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY")

@agent
class WeatherAgent:
    def __init__(self):
        self.api_key = "weather_api_key"

    @operation
    def get_forecast(self, location):
        # Get weather data
        return f"The weather in {location} is sunny."

def check_weather(city):
    weather_agent = WeatherAgent()
    forecast = weather_agent.get_forecast(city)
    return forecast

weather_info = check_weather("San Francisco")
```
When an agent-decorated class is instantiated within a session, an AGENT span is created automatically.
### @operation
The `@operation` decorator tracks discrete functions performed by an agent:
```python theme={null}
from agentops.sdk.decorators import agent, operation
import agentops

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY")

@agent
class MathAgent:
    @operation
    def add(self, a, b):
        return a + b

    @operation
    def multiply(self, a, b):
        return a * b

def calculate(x, y):
    math_agent = MathAgent()
    sum_result = math_agent.add(x, y)
    product_result = math_agent.multiply(x, y)
    return {"sum": sum_result, "product": product_result}

results = calculate(5, 3)
```
Operations represent the smallest meaningful units of work in your agent system. Each operation creates an OPERATION span with:
* Inputs (function arguments)
* Output (return value)
* Duration
* Success/failure status
### @workflow
The `@workflow` decorator tracks a sequence of operations that work together:
```python theme={null}
from agentops.sdk.decorators import agent, operation, workflow
import agentops

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY")

@agent
class TravelAgent:
    def __init__(self):
        self.flight_api = FlightAPI()
        self.hotel_api = HotelAPI()

    @workflow
    def plan_trip(self, destination, dates):
        # This workflow contains multiple operations
        flights = self.find_flights(destination, dates)
        hotels = self.find_hotels(destination, dates)
        return {
            "flights": flights,
            "hotels": hotels
        }

    @operation
    def find_flights(self, destination, dates):
        return self.flight_api.search(destination, dates)

    @operation
    def find_hotels(self, destination, dates):
        return self.hotel_api.search(destination, dates)
```
Workflows help you organize related operations and see their collective performance.
### @task
The `@task` decorator is similar to `@operation` but can be used for smaller units of work:
```python theme={null}
from agentops.sdk.decorators import agent, task
import agentops

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY")

@agent
class DataProcessor:
    @task
    def normalize_data(self, data):
        # Normalize the data
        return [x / sum(data) for x in data]

    @task
    def filter_outliers(self, data, threshold=3):
        # Filter outliers
        mean = sum(data) / len(data)
        std_dev = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
        return [x for x in data if abs(x - mean) <= threshold * std_dev]
```
The `@task` and `@operation` decorators function identically (they are aliases in the codebase), and you can choose the one that best fits your semantic needs.
### @tool
The `@tool` decorator tracks tool usage within agent operations and supports cost tracking. It works with all function types: synchronous, asynchronous, generator, and async generator.
```python theme={null}
from agentops.sdk.decorators import agent, tool
import asyncio

@agent
class ProcessingAgent:
    def __init__(self):
        pass

    @tool(cost=0.01)
    def sync_tool(self, item):
        """Synchronous tool with cost tracking."""
        return f"Processed {item}"

    @tool(cost=0.02)
    async def async_tool(self, item):
        """Asynchronous tool with cost tracking."""
        await asyncio.sleep(0.1)
        return f"Async processed {item}"

    @tool(cost=0.03)
    def generator_tool(self, items):
        """Generator tool with cost tracking."""
        for item in items:
            yield self.sync_tool(item)

    @tool(cost=0.04)
    async def async_generator_tool(self, items):
        """Async generator tool with cost tracking."""
        for item in items:
            await asyncio.sleep(0.1)
            yield await self.async_tool(item)
```
The tool decorator provides:
* Cost tracking for each tool call
* Proper span creation and nesting
* Support for all function types (sync, async, generator, async generator)
* Cost accumulation in generator and async generator operations
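To make the cost-accumulation point concrete, here is a plain-Python sketch of the idea. These names are illustrative, not AgentOps APIs: each yield of a generator tool counts as one tool call, so a fixed per-call cost adds up once per item produced.

```python
# Illustrative sketch only: run_generator_tool and SYNC_TOOL_COST are
# NOT AgentOps APIs. This mirrors @tool(cost=0.01) from the example above.
SYNC_TOOL_COST = 0.01

def run_generator_tool(items):
    total_cost = 0.0
    results = []
    for item in items:
        # One "tool call" per yielded item, so one unit of cost per item
        results.append(f"Processed {item}")
        total_cost += SYNC_TOOL_COST
    return results, round(total_cost, 4)

results, cost = run_generator_tool(["a", "b", "c"])
# results == ["Processed a", "Processed b", "Processed c"], cost == 0.03
```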
### @guardrail
The `@guardrail` decorator tracks guardrail input and output. You can specify the guardrail type (`"input"` or `"output"`) with the `spec` parameter.
```python theme={null}
from agentops.sdk.decorators import guardrail
import agentops
import re

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY")

@guardrail(spec="input")
def secret_key_guardrail(text):
    # Trip the guardrail if the input contains an API secret key
    pattern = r'\bsk-[a-zA-Z0-9]{10,}\b'
    return {
        "tripwire_triggered": bool(re.search(pattern, text))
    }
```
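The tripwire check itself is plain Python, so it can be exercised on its own. The standalone function below is a sketch for illustration, not an AgentOps API:

```python
import re

# The same secret-key pattern as the guardrail example, shown standalone
def contains_secret_key(text):
    pattern = r'\bsk-[a-zA-Z0-9]{10,}\b'
    return bool(re.search(pattern, text))

contains_secret_key("my key is sk-abc123def456")      # True: would trip the guardrail
contains_secret_key("what is the capital of France?") # False: input passes
```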
## Decorator Attributes
You can pass additional attributes to decorators:
```python theme={null}
from agentops.sdk.decorators import agent, operation
import agentops

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY")

@agent(name="custom_agent_name", attributes={"version": "1.0"})
class CustomAgent:
    @operation(name="custom_operation", attributes={"importance": "high"})
    def process(self, data):
        return data
```
Common attributes include:
| Attribute | Description | Example |
| ------------ | ------------------------------- | ------------------------------- |
| `name` | Custom name for the span | `name="weather_forecast"` |
| `attributes` | Dictionary of custom attributes | `attributes={"model": "gpt-4"}` |
## Complete Example
Here's a complete example using all the decorators together:
```python theme={null}
from agentops.sdk.decorators import session, agent, operation, workflow, task
import agentops

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY")

@session
def assist_user(query):
    # Create the main assistant
    assistant = Assistant()
    # Process the query
    return assistant.process_query(query)

@agent
class Assistant:
    def __init__(self):
        pass

    @workflow
    def process_query(self, query):
        research_agent = ResearchAgent()
        writing_agent = WritingAgent()
        # Research phase
        research = research_agent.gather_information(query)
        # Writing phase
        response = writing_agent.generate_response(query, research)
        return response

@agent
class ResearchAgent:
    @operation
    def gather_information(self, query):
        # Perform web search
        search_results = self.search(query)
        # Analyze results
        return self.analyze_results(search_results)

    @task
    def search(self, query):
        # Simulate web search
        return [f"Result for {query}", f"Another result for {query}"]

    @task
    def analyze_results(self, results):
        # Analyze search results
        return {"summary": "Analysis of " + ", ".join(results)}

@agent
class WritingAgent:
    @operation
    def generate_response(self, query, research):
        # Generate a response based on the research
        return f"Answer to '{query}' based on: {research['summary']}"

assist_user("What is the capital of France?")
```
In this example:
1. The `@session` decorator wraps the entire interaction
2. The `@agent` decorator defines multiple agent classes
3. The `@workflow` decorator creates a workflow that coordinates agents
4. The `@operation` and `@task` decorators track individual operations
5. All spans are properly nested in the hierarchy
Note that LLM and TOOL spans are automatically created when you use compatible LLM libraries or tool integrations.
## Best Practices
* **Use @session for top-level functions** that represent complete user interactions
* **Apply @agent to classes** that represent distinct components of your system
* **Use @operation for significant functions** that represent complete units of work
* **Use @task for smaller functions** that are part of larger operations
* **Apply @workflow to methods** that coordinate multiple operations
* **Keep decorator nesting consistent** with the logical hierarchy of your code
* **Add custom attributes** to provide additional context for analysis
* **Use meaningful names** for all decorated components
## Dashboard Visualization
In the AgentOps dashboard, decorators create spans that appear in:
1. **Timeline View**: Shows the execution sequence and duration
2. **Hierarchy View**: Displays the parent-child relationships
3. **Detail Panels**: Shows inputs, outputs, and attributes
4. **Performance Metrics**: Tracks execution times and success rates
This visualization helps you understand the flow and performance of your agent system.
# Host Environment
Source: https://docs.agentops.ai/v2/concepts/host-env
Automatically collected information about the environment where your agent runs
## Collected Information
The following information is automatically collected:
* **Operating System**: The OS type and version (e.g., "Linux", "Windows", "macOS")
* **Python Version**: The version of Python being used
* **Hostname**: The name of the host machine (anonymized)
* **AgentOps SDK Version**: The version of the AgentOps SDK being used
* **Process ID**: The ID of the process running the agent
## Usage in Analytics
Host environment information enables:
* Identifying environment-specific issues
* Tracking performance across different platforms
* Ensuring compatibility with specific OS versions
* Monitoring SDK version adoption
## Privacy Considerations
AgentOps is designed with privacy in mind:
* No personally identifiable information is collected
* Hostnames are anonymized
* No network scanning or detailed system analysis is performed
* Only the minimum necessary information for debugging is gathered
## Disabling Host Environment Collection
If needed, you can disable host environment collection when initializing AgentOps:
```python theme={null}
import agentops
# Disable host environment collection
agentops.init(api_key="YOUR_API_KEY", env_data_opt_out=True)
```
# Spans
Source: https://docs.agentops.ai/v2/concepts/spans
Understanding the different types of spans in AgentOps
## Core Span Types
AgentOps organizes all spans with specific kinds:
| Span Kind | Description |
| ----------- | ---------------------------------------------------------------------------- |
| `SESSION` | The root container for all activities in a single execution of your workflow |
| `AGENT` | Represents an autonomous entity with specialized capabilities |
| `WORKFLOW` | A logical grouping of related operations |
| `OPERATION` | A specific task or function performed by an agent |
| `TASK` | Alias for OPERATION, used interchangeably |
| `LLM` | An interaction with a language model |
| `TOOL` | The use of a tool or API by an agent |
## Span Hierarchy
Spans in AgentOps are organized hierarchically:
```
SESSION
├── AGENT
│   ├── OPERATION/TASK
│   │   ├── LLM
│   │   └── TOOL
│   └── WORKFLOW
│       └── OPERATION/TASK
└── LLM (unattributed to a specific agent)
```
Every span exists within the context of a session, and most spans (other than the session itself) have a parent span that provides context.
## Span Attributes
All spans in AgentOps include:
* **ID**: A unique identifier
* **Name**: A descriptive name
* **Kind**: The type of span (SESSION, AGENT, etc.)
* **Start Time**: When the span began
* **End Time**: When the span completed
* **Status**: Success or error status
* **Attributes**: Key-value pairs with additional metadata
Different span types have specialized attributes:
### LLM Spans
LLM spans track interactions with large language models and include:
* **Model**: The specific model used (e.g., "gpt-4", "claude-3-opus")
* **Provider**: The LLM provider (e.g., "OpenAI", "Anthropic")
* **Prompt Tokens**: Number of tokens in the input
* **Completion Tokens**: Number of tokens in the output
* **Cost**: The estimated cost of the interaction
* **Messages**: The prompt and completion content
### Tool Spans
Tool spans track the use of tools or APIs and include:
* **Tool Name**: The name of the tool used
* **Input**: The data provided to the tool
* **Output**: The result returned by the tool
* **Duration**: How long the tool operation took
### Operation/Task Spans
Operation spans track specific functions or tasks:
* **Operation Type**: The kind of operation performed
* **Parameters**: Input parameters to the operation
* **Result**: The output of the operation
* **Duration**: How long the operation took
## Creating Spans
There are several ways to create spans in AgentOps:
### Using Decorators
The recommended way to create spans is using decorators:
```python theme={null}
from agentops.sdk.decorators import agent, operation, session, workflow, task

@session
def my_workflow():
    agent_instance = MyAgent()
    return agent_instance.perform_task()

@agent
class MyAgent:
    @operation
    def perform_task(self):
        # Perform the task and return its result
        result = "task complete"
        return result
```
### Automatic Instrumentation
AgentOps automatically instruments LLM API calls from supported providers when `auto_instrument=True` (the default):
```python theme={null}
import agentops
from openai import OpenAI

# Initialize AgentOps
agentops.init(api_key="YOUR_API_KEY")

# Initialize the OpenAI client
client = OpenAI()

# This LLM call will be automatically tracked
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
## Viewing Spans in the Dashboard
All recorded spans are visible in the AgentOps dashboard:
1. **Timeline View**: Shows the sequence and duration of spans
2. **Tree View**: Displays the hierarchical relationship between spans
3. **Details Panel**: Provides in-depth information about each span
4. **Analytics**: Aggregates statistics across spans
## Best Practices
* Use descriptive names for spans to make them easily identifiable
* Create a logical hierarchy with sessions, agents, and operations
* Record relevant parameters and results for better debugging
* Use consistent naming conventions for span types
* Track costs and token usage to monitor resource consumption
# Tags
Source: https://docs.agentops.ai/v2/concepts/tags
Organize and filter your sessions with customizable tags
## Adding Tags
You can add tags when initializing AgentOps, which is the most common approach:
```python theme={null}
import agentops

# Initialize AgentOps with tags
agentops.init(
    api_key="YOUR_API_KEY",
    default_tags=["production", "customer-service", "gpt-4"]
)
```
Alternatively, when using manual trace creation:
```python theme={null}
# Initialize without auto-starting a session
agentops.init(api_key="YOUR_API_KEY", auto_start_session=False)
# Later start a trace with specific tags (modern approach)
trace = agentops.start_trace(trace_name="test_workflow", default_tags=["development", "testing", "claude-3"])
```
The legacy approach, `agentops.start_session(default_tags=["development", "testing", "claude-3"])`, is deprecated and will be removed in v4.0; use `agentops.start_trace()` instead.
## Tag Use Cases
Tags can be used for various purposes:
### Environment Identification
Tag sessions based on their environment:
```python theme={null}
default_tags=["production"] # or ["development", "staging", "testing"]
```
### Feature Tracking
Tag sessions related to specific features or components:
```python theme={null}
default_tags=["search-functionality", "user-authentication", "content-generation"]
```
### User Segmentation
Tag sessions based on user characteristics:
```python theme={null}
default_tags=["premium-user", "new-user", "enterprise-customer"]
```
### Experiment Tracking
Tag sessions as part of specific experiments:
```python theme={null}
default_tags=["experiment-123", "control-group", "variant-A"]
```
### Model Identification
Tag sessions with the models being used:
```python theme={null}
default_tags=["gpt-4", "claude-3-opus", "mistral-large"]
```
## Viewing Tagged Sessions
In the AgentOps dashboard:
1. Use the tag filter to select specific tags
2. Combine multiple tags to refine your view
3. Save filtered views for quick access
## Best Practices
* Use a consistent naming convention for tags
* Include both broad categories and specific identifiers
* Avoid using too many tags per session (3-5 is typically sufficient)
* Consider using hierarchical tag structures (e.g., "env:production", "model:gpt-4")
* Update your tagging strategy as your application evolves
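One way to keep hierarchical `key:value` tags consistent is to generate them from keyword arguments. The helper below is hypothetical (not part of the AgentOps SDK), just a pattern you might adopt in your own code:

```python
# Hypothetical helper, NOT an AgentOps API: builds sorted "key:value" tags
def make_tags(**parts):
    return [f"{key}:{value}" for key, value in sorted(parts.items())]

tags = make_tags(env="production", model="gpt-4")
# tags == ["env:production", "model:gpt-4"]
# These could then be passed to the SDK, e.g.:
#   agentops.init(api_key="YOUR_API_KEY", default_tags=tags)
```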
# Traces
Source: https://docs.agentops.ai/v2/concepts/traces
Effectively manage traces in your agent workflow
## Automatic Trace Management
The simplest way to create and manage traces is to use the `init` function with automatic trace creation:
```python theme={null}
import agentops
# Initialize with automatic trace creation (default)
agentops.init(api_key="YOUR_API_KEY", default_tags=["production"])
```
This approach:
* Creates a trace automatically when you initialize the SDK
* Tracks all events in the context of this trace
* Manages the trace throughout the lifecycle of your application
## Manual Trace Creation
For more control, you can disable automatic trace creation and start traces manually:
```python theme={null}
import agentops

# Initialize without auto-starting a trace
agentops.init(api_key="YOUR_API_KEY", auto_start_session=False)

# Later, manually start a trace when needed
trace_context = agentops.start_trace(
    trace_name="Customer Workflow",
    tags=["customer-query", "high-priority"]
)

# End the trace when done
agentops.end_trace(trace_context, end_state="Success")
```
Manual trace management is useful when:
* You want to control exactly when trace tracking begins
* You need to associate different traces with different sets of tags
* Your application has distinct workflows that should be tracked separately
## Using the Trace Decorator
You can use the `@trace` decorator to create a trace for a specific function:
```python theme={null}
import agentops

@agentops.trace
def process_customer_data(customer_id):
    # This entire function execution will be tracked as a trace
    return analyze_data(customer_id)

# Or with custom parameters
@agentops.trace(name="data_processing", tags=["analytics"])
def analyze_user_behavior(user_data):
    return perform_analysis(user_data)
```
## Trace Context Manager
TraceContext objects support Python's context manager protocol, making it easy to manage trace lifecycles:
```python theme={null}
import agentops

# Using trace context as a context manager
with agentops.start_trace("user_session", tags=["web"]) as trace:
    # All operations here are tracked within this trace
    process_user_request()

# The trace automatically ends when exiting the context;
# its Success/Error state is set based on whether an exception occurred
```
## Trace States
Every trace has an associated state that indicates its completion status. AgentOps provides multiple ways to specify trace end states for flexibility and backward compatibility.
### AgentOps TraceState Enum (Recommended)
The recommended approach is to use the `TraceState` enum from AgentOps:
```python theme={null}
from agentops import TraceState
# Available states
agentops.end_trace(trace_context, end_state=TraceState.SUCCESS) # Trace completed successfully
agentops.end_trace(trace_context, end_state=TraceState.ERROR) # Trace encountered an error
agentops.end_trace(trace_context, end_state=TraceState.UNSET) # Trace state is not determined
```
### OpenTelemetry StatusCode
For advanced users familiar with OpenTelemetry, you can use StatusCode directly:
```python theme={null}
from opentelemetry.trace.status import StatusCode
agentops.end_trace(trace_context, end_state=StatusCode.OK) # Same as TraceState.SUCCESS
agentops.end_trace(trace_context, end_state=StatusCode.ERROR) # Same as TraceState.ERROR
agentops.end_trace(trace_context, end_state=StatusCode.UNSET) # Same as TraceState.UNSET
```
### String Values
String values are also supported for convenience:
```python theme={null}
# String representations
agentops.end_trace(trace_context, end_state="Success") # Maps to SUCCESS
agentops.end_trace(trace_context, end_state="Error") # Maps to ERROR
agentops.end_trace(trace_context, end_state="Indeterminate") # Maps to UNSET
```
### State Mapping
All state representations map to the same underlying OpenTelemetry StatusCode:
| AgentOps TraceState | OpenTelemetry StatusCode | String Values | Description |
| -------------------- | ------------------------ | --------------- | ----------------------------- |
| `TraceState.SUCCESS` | `StatusCode.OK` | "Success" | Trace completed successfully |
| `TraceState.ERROR` | `StatusCode.ERROR` | "Error" | Trace encountered an error |
| `TraceState.UNSET` | `StatusCode.UNSET` | "Indeterminate" | Trace state is not determined |
### Default Behavior
If no end state is provided, the default is `TraceState.SUCCESS`:
```python theme={null}
# These are equivalent
agentops.end_trace(trace_context)
agentops.end_trace(trace_context, end_state=TraceState.SUCCESS)
```
## Trace Attributes
Every trace collects comprehensive metadata to provide rich context for analysis. Trace attributes are automatically captured by AgentOps and fall into several categories:
### Core Trace Attributes
**Identity and Timing:**
* **Trace ID**: A unique identifier for the trace
* **Span ID**: Identifier for the root span of the trace
* **Start Time**: When the trace began
* **End Time**: When the trace completed (set automatically)
* **Duration**: Total execution time (calculated automatically)
**User-Defined Attributes:**
* **Trace Name**: Custom name provided when starting the trace
* **Tags**: Labels for filtering and grouping (list of strings or dictionary)
* **End State**: Success, error, or unset status
```python theme={null}
# Tags can be provided as a list of strings or a dictionary
agentops.start_trace("my_trace", tags=["production", "experiment-a"])
agentops.start_trace("my_trace", tags={"environment": "prod", "version": "1.2.3"})
```
### Resource Attributes
AgentOps automatically captures system and environment information:
**Project and Service:**
* **Project ID**: AgentOps project identifier
* **Service Name**: Service name (defaults to "agentops")
* **Service Version**: Version of your service
* **Environment**: Deployment environment (dev, staging, prod)
* **SDK Version**: AgentOps SDK version being used
**Host System Information:**
* **Host Name**: Machine hostname
* **Host System**: Operating system (Windows, macOS, Linux)
* **Host Version**: OS version details
* **Host Processor**: CPU architecture information
* **Host Machine**: Machine type identifier
**Performance Metrics:**
* **CPU Count**: Number of available CPU cores
* **CPU Percent**: CPU utilization at trace start
* **Memory Total**: Total system memory
* **Memory Available**: Available system memory
* **Memory Used**: Currently used memory
* **Memory Percent**: Memory utilization percentage
**Dependencies:**
* **Imported Libraries**: List of Python packages imported in your environment
### Span Hierarchy
**Nested Operations:**
* **Spans**: All spans (operations, agents, tools, workflows) recorded during the trace
* **Parent-Child Relationships**: Hierarchical structure of operations
* **Span Kinds**: Types of operations (agents, tools, workflows, tasks)
### Accessing Trace Attributes
While most attributes are automatically captured, you can access trace information programmatically:
```python theme={null}
import agentops
# Start a trace and get the context
trace_context = agentops.start_trace("my_workflow", tags={"version": "1.0"})
# Access trace information
trace_id = trace_context.span.get_span_context().trace_id
span_id = trace_context.span.get_span_context().span_id
print(f"Trace ID: {trace_id}")
print(f"Span ID: {span_id}")
# End the trace
agentops.end_trace(trace_context)
```
### Custom Attributes
You can add custom attributes to spans within your trace:
```python theme={null}
import agentops

with agentops.start_trace("custom_workflow") as trace:
    # Add custom attributes to the current span
    trace.span.set_attribute("custom.workflow.step", "data_processing")
    trace.span.set_attribute("custom.batch.size", 100)
    trace.span.set_attribute("custom.user.id", "user_123")
    # Your workflow logic here
    process_data()
```
### Attribute Naming Conventions
AgentOps follows OpenTelemetry semantic conventions for attribute naming:
* **AgentOps Specific**: `agentops.*` (e.g., `agentops.span.kind`)
* **GenAI Operations**: `gen_ai.*` (e.g., `gen_ai.request.model`)
* **System Resources**: Standard names (e.g., `host.name`, `service.name`)
* **Custom Attributes**: Use your own namespace (e.g., `myapp.user.id`)
## Trace Context
Traces create a context for all span recording. When a span is recorded:
1. It's associated with the current active trace
2. It's automatically included in the trace's timeline
3. It inherits the trace's tags for filtering and analysis
## Viewing Traces in the Dashboard
The AgentOps dashboard provides several views for analyzing your traces:
1. **Trace List**: Overview of all traces with filtering options
2. **Trace Details**: In-depth view of a single trace
3. **Timeline View**: Chronological display of all spans in a trace
4. **Tree View**: Hierarchical representation of agents, operations, and events
5. **Analytics**: Aggregated metrics across traces
## Best Practices
* **Start traces at logical boundaries** in your application workflow
* **Use descriptive trace names** to easily identify them in the dashboard
* **Apply consistent tags** to group related traces
* **Use fewer, longer traces** rather than many short ones for better analysis
* **Use automatic trace management** unless you have specific needs for manual control
* **Leverage context managers** for automatic trace lifecycle management
* **Set appropriate end states** to track success/failure rates
# Examples
Source: https://docs.agentops.ai/v2/examples/examples
Examples of AgentOps with various integrations
## Explore our examples to see AgentOps in action!
### LLM Integrations
* [Anthropic](/v2/examples/anthropic): Claude integration with tool usage and advanced features
* [Google GenAI](/v2/examples/google_genai): Google Gemini models and their examples
* [LiteLLM](/v2/examples/litellm): Unified LLM interface monitoring example
* [OpenAI](/v2/examples/openai): Advanced multi-tool orchestration with GPT models
* [Watsonx](/v2/examples/watsonx): Watsonx text chat integration example
* [xAI](/v2/examples/xai): Grok LLM basic usage patterns
### Agent Integrations
* [AG2](/v2/examples/ag2): Multi-agent conversations with memory capabilities
* [Agno](/v2/examples/agno): Modern AI agent framework with teams, workflows, and tool integration
* [AutoGen](/v2/examples/autogen): AG2 multi-agent workflow demonstration
* [CrewAI](/v2/examples/crewai): CrewAI multi-agent framework example
* [Google ADK](/v2/examples/google_adk): Google Agent Development Kit integration
* [Haystack](/v2/integrations/haystack): Monitor your Haystack agents with AgentOps
* [OpenAI Agents](/v2/examples/openai_agents): OpenAI Agents SDK workflow walkthrough
* [LangChain](/v2/examples/langchain): LangChain callback handler integration
* [Mem0](/v2/examples/mem0): Comprehensive memory operations with Mem0ai
* [smolagents](/v2/integrations/smolagents): Track HuggingFace's smolagents with AgentOps seamlessly
# AG2
Source: https://docs.agentops.ai/v2/integrations/ag2
Track and analyze your AG2 agents with AgentOps
[AG2](https://ag2.ai/) (formerly AutoGen) is a framework for building multi-agent conversational AI systems. AgentOps provides seamless, automatic instrumentation for AG2 — just call `agentops.init()` and all agent interactions are tracked.
## Installation
```bash pip theme={null}
pip install agentops pyautogen
```
```bash poetry theme={null}
poetry add agentops pyautogen
```
```bash uv theme={null}
uv pip install agentops pyautogen
```
## Setting Up API Keys
Before using AG2 with AgentOps, you need to set up your API keys. You can obtain:
* **OPENAI\_API\_KEY**: From the [OpenAI Platform](https://platform.openai.com/api-keys)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
You can either export them as environment variables or set them in a `.env` file.
```bash Export to CLI theme={null}
export OPENAI_API_KEY="your_openai_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
OPENAI_API_KEY="your_openai_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load the environment variables in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# load_dotenv() already populates os.environ; read the keys if you need them directly
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
```
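Since `load_dotenv()` silently does nothing when the `.env` file is missing, it can help to fail fast on a missing key rather than hit an opaque auth error later. A minimal sketch (the `require_env` helper is ours, not part of any SDK):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, or raise a clear error if unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Usage: fails at startup instead of deep inside an API call
# openai_api_key = require_env("OPENAI_API_KEY")
```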
## Usage
Initialize AgentOps at the beginning of your application to automatically track all AG2 agent interactions:
```python Single Agent Conversation theme={null}
import agentops
import autogen
import os
# Initialize AgentOps
agentops.init()
# Configure your AG2 agents
config_list = [
{
"model": "gpt-4",
"api_key": os.getenv("OPENAI_API_KEY"),
}
]
llm_config = {
"config_list": config_list,
"timeout": 60,
}
# Create a single agent
assistant = autogen.AssistantAgent(
name="assistant",
llm_config=llm_config,
system_message="You are a helpful AI assistant."
)
user_proxy = autogen.UserProxyAgent(
name="user_proxy",
human_input_mode="TERMINATE",
max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: (x.get("content") or "").rstrip().endswith("TERMINATE"),
code_execution_config={"last_n_messages": 3, "work_dir": "coding"},
)
# Initiate a conversation
user_proxy.initiate_chat(
assistant,
message="How can I implement a basic web scraper in Python?"
)
```
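The `is_termination_msg` predicate above is plain Python, so it can be tested in isolation before wiring it into an agent. A standalone version (with a guard for `None` content):

```python
def is_termination_msg(message: dict) -> bool:
    """Return True when a message's content ends with the TERMINATE sentinel."""
    content = message.get("content") or ""
    return content.rstrip().endswith("TERMINATE")

print(is_termination_msg({"content": "All done. TERMINATE"}))  # True
print(is_termination_msg({"content": "Still working..."}))     # False
print(is_termination_msg({"content": None}))                   # False
```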
## Examples
* AG2 Async Agent Chat with Automated Responses: demonstrates asynchronous human input with AG2 agents.
* AG2 agents using a Wikipedia search tool.
* Orchestrating a team of specialized agents (researcher, coder, critic) with full AgentOps tracing.
## Resources
* Official AG2 documentation on integrating with AgentOps.
* Full observability for multi-agent systems with AG2's built-in tracing.
# Agno
Source: https://docs.agentops.ai/v2/integrations/agno
Track your Agno agents, teams, and workflows with AgentOps
[Agno](https://docs.agno.com) is a modern AI agent framework for building intelligent agents, teams, and workflows. AgentOps provides automatic instrumentation to track all Agno operations including agent interactions, team coordination, tool usage, and workflow execution.
## Installation
Install AgentOps and Agno:
```bash pip theme={null}
pip install agentops agno
```
```bash poetry theme={null}
poetry add agentops agno
```
```bash uv theme={null}
uv pip install agentops agno
```
## Setting Up API Keys
You'll need API keys for AgentOps and your chosen LLM provider:
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
* **OPENAI\_API\_KEY**: From the [OpenAI Platform](https://platform.openai.com/api-keys) (if using OpenAI)
* **ANTHROPIC\_API\_KEY**: From [Anthropic Console](https://console.anthropic.com/) (if using Claude)
Set these as environment variables or in a `.env` file.
```bash Export to CLI theme={null}
export AGENTOPS_API_KEY="your_agentops_api_key_here"
export OPENAI_API_KEY="your_openai_api_key_here"
export ANTHROPIC_API_KEY="your_anthropic_api_key_here" # Optional
```
```txt Set in .env file theme={null}
AGENTOPS_API_KEY="your_agentops_api_key_here"
OPENAI_API_KEY="your_openai_api_key_here"
ANTHROPIC_API_KEY="your_anthropic_api_key_here" # Optional
```
## Quick Start
```python theme={null}
import os
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
from agno.agent import Agent
from agno.team import Team
from agno.models.openai import OpenAIChat
# Initialize AgentOps
import agentops
agentops.init(api_key=os.getenv("AGENTOPS_API_KEY"))
# Create and run an agent
agent = Agent(
name="Assistant",
role="Helpful AI assistant",
model=OpenAIChat(id="gpt-4o-mini")
)
response = agent.run("What are the key benefits of AI agents?")
print(response.content)
```
## AgentOps Integration
### Basic Agent Tracking
AgentOps automatically instruments Agno agents and teams:
```python theme={null}
import os
import agentops
from agno.agent import Agent
from agno.team import Team
from agno.models.openai import OpenAIChat
# Initialize AgentOps - this enables automatic tracking
agentops.init(api_key=os.getenv("AGENTOPS_API_KEY"))
# Create agents - automatically tracked by AgentOps
agent = Agent(
name="Assistant",
role="Helpful AI assistant",
model=OpenAIChat(id="gpt-4o-mini")
)
# Create teams - coordination automatically tracked
team = Team(
name="Research Team",
mode="coordinate",
members=[agent]
)
# All operations are automatically logged to AgentOps
response = team.run("Analyze the current AI market trends")
print(response.content)
```
## What Gets Tracked
AgentOps automatically captures:
* **Agent Interactions**: All agent inputs, outputs, and configurations
* **Team Coordination**: Multi-agent collaboration patterns and results
* **Tool Executions**: Function calls, parameters, and return values
* **Workflow Steps**: Session states, caching, and performance metrics
* **Token Usage**: Costs and resource consumption across all operations
* **Timing Metrics**: Response times and concurrent operation performance
* **Error Tracking**: Failures and debugging information
## Dashboard and Monitoring
Once your Agno agents are running with AgentOps, you can monitor them in the [AgentOps Dashboard](https://app.agentops.ai/):
* **Real-time Monitoring**: Live agent status and performance
* **Execution Traces**: Detailed logs of agent interactions
* **Performance Analytics**: Token usage, costs, and timing metrics
* **Team Collaboration**: Visual representation of multi-agent workflows
* **Error Tracking**: Comprehensive error logs and debugging information
## Examples
* Learn the fundamentals of creating AI agents and organizing them into collaborative teams.
* Execute multiple AI tasks concurrently for improved performance using asyncio.
* Build sophisticated multi-agent teams with specialized tools for comprehensive research.
* Implement Retrieval-Augmented Generation with vector databases and knowledge bases.
* Create custom workflows with intelligent caching for optimized agent performance.
# Anthropic
Source: https://docs.agentops.ai/v2/integrations/anthropic
Track and analyze your Anthropic API calls with AgentOps
AgentOps provides seamless integration with [Anthropic's Python SDK](https://github.com/anthropics/anthropic-sdk-python), allowing you to track and analyze all your Claude model interactions automatically.
## Installation
```bash pip theme={null}
pip install agentops anthropic
```
```bash poetry theme={null}
poetry add agentops anthropic
```
```bash uv theme={null}
uv pip install agentops anthropic
```
## Setting Up API Keys
Before using Anthropic with AgentOps, you need to set up your API keys. You can obtain:
* **ANTHROPIC\_API\_KEY**: From the [Anthropic Console](https://console.anthropic.com/)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
You can either export them as environment variables or set them in a `.env` file.
```bash Export to CLI theme={null}
export ANTHROPIC_API_KEY="your_anthropic_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
ANTHROPIC_API_KEY="your_anthropic_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load the environment variables in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# load_dotenv() already populates os.environ; read the keys if you need them directly
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
```
## Usage
Initialize AgentOps at the beginning of your application to automatically track all Anthropic API calls:
```python theme={null}
import agentops
import anthropic
# Initialize AgentOps
agentops.init()
# Create Anthropic client
client = anthropic.Anthropic()
# Make a completion request - AgentOps will track it automatically
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "What is artificial intelligence?"}
    ]
)
# Print the response received
print(message.content[0].text)
```
## Examples
```python Streaming theme={null}
import agentops
import anthropic
# Initialize AgentOps
agentops.init()
# Create Anthropic client
client = anthropic.Anthropic()
# Make a streaming request
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a short poem about artificial intelligence."}
    ]
) as stream:
for text in stream.text_stream:
print(text, end="", flush=True)
print()
```
```python Tool Use theme={null}
import agentops
import anthropic
import json
from datetime import datetime
# Initialize AgentOps
agentops.init()
# Create Anthropic client
client = anthropic.Anthropic()
# Define tools
tools = [
{
"type": "custom",
"name": "get_current_time",
"description": "Get the current date and time",
"input_schema": {
"type": "object",
"properties": {},
"required": []
}
}
]
def get_current_time():
return {"current_time": datetime.now().isoformat()}
# Make a request with tools
message = client.messages.create(
    model="claude-opus-4-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What time is it now?"}
    ]
)
# Handle tool use: Claude returns content blocks of type "tool_use"
tool_use_blocks = [block for block in message.content if block.type == "tool_use"]
if message.stop_reason == "tool_use" and tool_use_blocks:
    tool_use = tool_use_blocks[0]
    if tool_use.name == "get_current_time":
        tool_response = get_current_time()
        # Send the result back in a tool_result block to continue the conversation
        second_message = client.messages.create(
            model="claude-opus-4-20250514",
            max_tokens=1024,
            tools=tools,
            messages=[
                {"role": "user", "content": "What time is it now?"},
                {"role": "assistant", "content": message.content},
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "tool_result",
                            "tool_use_id": tool_use.id,
                            "content": json.dumps(tool_response)
                        }
                    ]
                }
            ]
        )
        print(second_message.content[0].text)
else:
    print(message.content[0].text)
```
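When an agent exposes several tools, a handler registry keeps the dispatch logic testable on its own. A minimal sketch (the registry pattern is ours, not part of the Anthropic SDK):

```python
import json
from datetime import datetime, timezone

def get_current_time() -> dict:
    return {"current_time": datetime.now(timezone.utc).isoformat()}

# Map tool names (as declared in the `tools` list) to local handlers
TOOL_HANDLERS = {"get_current_time": get_current_time}

def run_tool(name: str, tool_input: dict) -> str:
    """Look up a tool by name, run it, and serialize the result for a tool_result block."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        return json.dumps({"error": f"unknown tool: {name}"})
    return json.dumps(handler(**tool_input))
```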
## More Examples
* Claude integration with tool usage and advanced features.
* Synchronous calls with the Anthropic SDK.
* Asynchronous calls with the Anthropic SDK.
# AutoGen
Source: https://docs.agentops.ai/v2/integrations/autogen
Integrate AgentOps with Microsoft AutoGen for multi-agent workflow tracking
[AutoGen](https://microsoft.github.io/autogen/stable/) is Microsoft's framework for building multi-agent conversational AI systems. AgentOps provides seamless integration with AutoGen to track and monitor your multi-agent workflows.
## Installation
```bash pip theme={null}
pip install agentops autogen-core python-dotenv
```
```bash poetry theme={null}
poetry add agentops autogen-core python-dotenv
```
```bash uv theme={null}
uv pip install agentops autogen-core python-dotenv
```
## Setting Up API Keys
Before using AutoGen with AgentOps, you need to set up your API keys. You can obtain:
* **OPENAI\_API\_KEY**: From the [OpenAI Platform](https://platform.openai.com/api-keys)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
You can either export them as environment variables or set them in a `.env` file.
```bash Export to CLI theme={null}
export OPENAI_API_KEY="your_openai_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
OPENAI_API_KEY="your_openai_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load the environment variables in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# load_dotenv() already populates os.environ; read the keys if you need them directly
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
```
## Usage
AgentOps automatically instruments AutoGen agents and tracks their interactions. Simply initialize AgentOps before creating your AutoGen agents!
```python Countdown theme={null}
import asyncio
from dataclasses import dataclass
from typing import Callable
import agentops
from autogen_core import (
DefaultTopicId,
MessageContext,
RoutedAgent,
default_subscription,
message_handler,
AgentId,
SingleThreadedAgentRuntime
)
# Initialize AgentOps
agentops.init()
@dataclass
class CountdownMessage:
"""Message containing a number for countdown operations"""
content: int
@default_subscription
class ModifierAgent(RoutedAgent):
"""Agent that modifies numbers by applying a transformation function"""
def __init__(self, modify_val: Callable[[int], int]) -> None:
super().__init__("A modifier agent that transforms numbers.")
self._modify_val = modify_val
@message_handler
async def handle_message(self, message: CountdownMessage, ctx: MessageContext) -> None:
"""Handle incoming messages and apply modification"""
original_val = message.content
modified_val = self._modify_val(original_val)
print(f"🔧 ModifierAgent: Transformed {original_val} → {modified_val}")
# Publish the modified value to continue the workflow
await self.publish_message(
CountdownMessage(content=modified_val),
DefaultTopicId()
)
@default_subscription
class CheckerAgent(RoutedAgent):
"""Agent that checks if a condition is met and decides whether to continue"""
def __init__(self, stop_condition: Callable[[int], bool]) -> None:
super().__init__("A checker agent that validates conditions.")
self._stop_condition = stop_condition
@message_handler
async def handle_message(self, message: CountdownMessage, ctx: MessageContext) -> None:
"""Handle incoming messages and check stopping condition"""
value = message.content
if not self._stop_condition(value):
print(f"✅ CheckerAgent: {value} passed validation, continuing workflow")
# Continue the workflow by publishing the message
await self.publish_message(
CountdownMessage(content=value),
DefaultTopicId()
)
else:
print(f"🛑 CheckerAgent: {value} failed validation, stopping workflow")
print("🎉 Countdown completed successfully!")
async def run_countdown_workflow():
"""Run a countdown workflow from 10 to 1 using AutoGen agents"""
print("🚀 Starting AutoGen Countdown Workflow")
print("=" * 50)
# Create the AutoGen runtime
runtime = SingleThreadedAgentRuntime()
# Register the modifier agent (subtracts 1 from each number)
await ModifierAgent.register(
runtime,
"modifier",
lambda: ModifierAgent(modify_val=lambda x: x - 1),
)
# Register the checker agent (stops when value <= 1)
await CheckerAgent.register(
runtime,
"checker",
lambda: CheckerAgent(stop_condition=lambda x: x <= 1),
)
# Start the runtime
runtime.start()
print("🤖 AutoGen runtime started")
print("📨 Sending initial message with value: 10")
# Send initial message to start the countdown
await runtime.send_message(
CountdownMessage(10),
AgentId("checker", "default")
)
# Wait for the workflow to complete
await runtime.stop_when_idle()
print("=" * 50)
print("✨ Workflow completed! Check your AgentOps dashboard for detailed traces.")
# Run the workflow
if __name__ == "__main__":
asyncio.run(run_countdown_workflow())
```
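The countdown logic itself is independent of the message runtime, so you can replay the agents' behavior synchronously to reason about it. This helper is an illustration, not an AutoGen API:

```python
def replay_countdown(start, modify, should_stop):
    """Replay the ModifierAgent/CheckerAgent loop without the message runtime."""
    values = [start]
    value = start
    # The checker passes each value through until the stop condition holds
    while not should_stop(value):
        value = modify(value)
        values.append(value)
    return values

print(replay_countdown(10, lambda x: x - 1, lambda x: x <= 1))
# [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
```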
```python Multi-Agent theme={null}
import asyncio
from dataclasses import dataclass
from typing import List, Dict, Any
import agentops
from autogen_core import (
DefaultTopicId,
MessageContext,
RoutedAgent,
default_subscription,
message_handler,
AgentId,
SingleThreadedAgentRuntime
)
# Initialize AgentOps
agentops.init()
@dataclass
class DataMessage:
"""Message containing data to be processed"""
data: List[Dict[str, Any]]
stage: str
metadata: Dict[str, Any]
@default_subscription
class DataCollectorAgent(RoutedAgent):
"""Agent responsible for collecting and preparing initial data"""
def __init__(self) -> None:
super().__init__("Data collector agent that gathers initial dataset.")
@message_handler
async def handle_message(self, message: DataMessage, ctx: MessageContext) -> None:
print(f"📊 DataCollector: Collecting data for {message.metadata.get('source', 'unknown')}")
# Simulate data collection
collected_data = [
{"id": 1, "value": 100, "category": "A"},
{"id": 2, "value": 200, "category": "B"},
{"id": 3, "value": 150, "category": "A"},
{"id": 4, "value": 300, "category": "C"},
]
print(f"✅ DataCollector: Collected {len(collected_data)} records")
# Send to processor
await self.publish_message(
DataMessage(
data=collected_data,
stage="processing",
metadata={**message.metadata, "collected_count": len(collected_data)}
),
DefaultTopicId()
)
@default_subscription
class DataProcessorAgent(RoutedAgent):
"""Agent that processes and transforms data"""
def __init__(self) -> None:
super().__init__("Data processor agent that transforms collected data.")
@message_handler
async def handle_message(self, message: DataMessage, ctx: MessageContext) -> None:
if message.stage != "processing":
return
print(f"⚙️ DataProcessor: Processing {len(message.data)} records")
# Process data - add calculated fields
processed_data = []
for item in message.data:
processed_item = {
**item,
"processed_value": item["value"] * 1.1, # 10% increase
"status": "processed"
}
processed_data.append(processed_item)
print(f"✅ DataProcessor: Processed {len(processed_data)} records")
# Send to analyzer
await self.publish_message(
DataMessage(
data=processed_data,
stage="analysis",
metadata={**message.metadata, "processed_count": len(processed_data)}
),
DefaultTopicId()
)
@default_subscription
class DataAnalyzerAgent(RoutedAgent):
"""Agent that analyzes processed data and generates insights"""
def __init__(self) -> None:
super().__init__("Data analyzer agent that generates insights.")
@message_handler
async def handle_message(self, message: DataMessage, ctx: MessageContext) -> None:
if message.stage != "analysis":
return
print(f"🧠 DataAnalyzer: Analyzing {len(message.data)} records")
# Perform analysis
total_value = sum(item["processed_value"] for item in message.data)
avg_value = total_value / len(message.data)
categories = set(item["category"] for item in message.data)
analysis_results = {
"total_records": len(message.data),
"total_value": total_value,
"average_value": avg_value,
"unique_categories": len(categories),
"categories": list(categories)
}
print(f"📈 DataAnalyzer: Analysis complete")
print(f" • Total records: {analysis_results['total_records']}")
print(f" • Average value: {analysis_results['average_value']:.2f}")
print(f" • Categories: {', '.join(analysis_results['categories'])}")
# Send to reporter
await self.publish_message(
DataMessage(
data=message.data,
stage="reporting",
metadata={
**message.metadata,
"analysis": analysis_results
}
),
DefaultTopicId()
)
@default_subscription
class ReportGeneratorAgent(RoutedAgent):
"""Agent that generates final reports"""
def __init__(self) -> None:
super().__init__("Report generator agent that creates final output.")
@message_handler
async def handle_message(self, message: DataMessage, ctx: MessageContext) -> None:
if message.stage != "reporting":
return
print(f"📝 ReportGenerator: Generating final report")
analysis = message.metadata.get("analysis", {})
report = f"""
🎯 DATA PROCESSING REPORT
========================
Source: {message.metadata.get('source', 'Unknown')}
Processing Date: {message.metadata.get('timestamp', 'Unknown')}
📊 SUMMARY STATISTICS:
• Total Records Processed: {analysis.get('total_records', 0)}
• Total Value: ${analysis.get('total_value', 0):,.2f}
• Average Value: ${analysis.get('average_value', 0):,.2f}
• Unique Categories: {analysis.get('unique_categories', 0)}
• Categories Found: {', '.join(analysis.get('categories', []))}
✅ Processing pipeline completed successfully!
"""
print(report)
print("🎉 Multi-agent data processing workflow completed!")
async def run_data_processing_pipeline():
"""Run a complete data processing pipeline using multiple AutoGen agents"""
print("🚀 Starting AutoGen Data Processing Pipeline")
print("=" * 60)
# Create runtime
runtime = SingleThreadedAgentRuntime()
# Register all agents
await DataCollectorAgent.register(
runtime,
"collector",
lambda: DataCollectorAgent(),
)
await DataProcessorAgent.register(
runtime,
"processor",
lambda: DataProcessorAgent(),
)
await DataAnalyzerAgent.register(
runtime,
"analyzer",
lambda: DataAnalyzerAgent(),
)
await ReportGeneratorAgent.register(
runtime,
"reporter",
lambda: ReportGeneratorAgent(),
)
# Start runtime
runtime.start()
print("🤖 AutoGen runtime with 4 agents started")
# Trigger the pipeline
initial_message = DataMessage(
data=[],
stage="collection",
metadata={
"source": "customer_database",
"timestamp": "2024-01-15T10:30:00Z",
"pipeline_id": "data_proc_001"
}
)
print("📨 Triggering data processing pipeline...")
await runtime.send_message(
initial_message,
AgentId("collector", "default")
)
# Wait for completion
await runtime.stop_when_idle()
print("=" * 60)
print("✨ Pipeline completed! Check AgentOps dashboard for detailed agent traces.")
# Run the pipeline
if __name__ == "__main__":
asyncio.run(run_data_processing_pipeline())
```
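The analyzer's aggregation is pure Python, so it can be verified outside the runtime. A standalone sketch mirroring `DataAnalyzerAgent` (categories are sorted here for deterministic output):

```python
def analyze(records):
    """Summarize processed records the way DataAnalyzerAgent does."""
    total_value = sum(item["processed_value"] for item in records)
    categories = sorted({item["category"] for item in records})
    return {
        "total_records": len(records),
        "total_value": total_value,
        "average_value": total_value / len(records),
        "unique_categories": len(categories),
        "categories": categories,
    }

# The pipeline's sample data after the processor's 10% increase
sample = [
    {"processed_value": 110.0, "category": "A"},
    {"processed_value": 220.0, "category": "B"},
    {"processed_value": 165.0, "category": "A"},
    {"processed_value": 330.0, "category": "C"},
]
print(analyze(sample)["average_value"])  # 206.25
```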
## Examples
* Basic multi-agent chat functionality.
* An agent specialized for mathematical problem-solving.
Visit your [AgentOps Dashboard](https://app.agentops.ai) to see detailed traces of your AutoGen agent interactions, performance metrics, and workflow analytics.
# CrewAI
Source: https://docs.agentops.ai/v2/integrations/crewai
AgentOps and CrewAI teamed up to make monitoring Crew agents dead simple.
[CrewAI](https://www.crewai.com/) is a framework for easily building multi-agent applications. AgentOps integrates with CrewAI to provide observability into your agent workflows. Crew has comprehensive [documentation](https://docs.crewai.com) available as well as a great [quickstart](https://docs.crewai.com/how-to/Creating-a-Crew-and-kick-it-off/) guide.
## Installation
Install AgentOps and CrewAI, along with `python-dotenv` for managing API keys:
```bash pip theme={null}
pip install agentops crewai python-dotenv
```
```bash poetry theme={null}
poetry add agentops crewai python-dotenv
```
```bash uv theme={null}
uv pip install agentops crewai python-dotenv
```
## Setting Up API Keys
You'll need API keys for AgentOps and OpenAI (since CrewAI's built-in `LLM` uses OpenAI models by default):
* **OPENAI\_API\_KEY**: From the [OpenAI Platform](https://platform.openai.com/api-keys)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
Set these as environment variables or in a `.env` file.
```bash Export to CLI theme={null}
export OPENAI_API_KEY="your_openai_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
OPENAI_API_KEY="your_openai_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load them in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
load_dotenv()
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```
## Usage
Simply initialize AgentOps at the beginning of your CrewAI application. AgentOps automatically instruments CrewAI components—including its `LLM`—to track your agent interactions.
Here's how to set up a basic CrewAI application with AgentOps:
```python theme={null}
import agentops
from crewai import Agent, Task, Crew, LLM
# Initialize AgentOps client
agentops.init()
# Define the LLM to use with CrewAI
llm = LLM(
model="openai/gpt-4o", # Or your preferred model
temperature=0.7,
)
# Create an agent
researcher = Agent(
role='Researcher',
goal='Research and provide accurate information about cities and their history',
backstory='You are an expert researcher with vast knowledge of world geography and history.',
llm=llm,
verbose=True
)
# Create a task
research_task = Task(
description='What is the capital of France? Provide a detailed answer about its history, culture, and significance.',
expected_output='A comprehensive response about Paris, including its status as the capital of France, historical significance, cultural importance, and key landmarks.',
agent=researcher
)
# Create a crew with the researcher
crew = Crew(
agents=[researcher],
tasks=[research_task],
verbose=True
)
# Execute the task
result = crew.kickoff()
print("\nCrew Research Results:")
print(result)
```
## Examples
* Create job postings with a crew of specialized agents.
* Validate and improve markdown content using CrewAI agents.
# Google ADK
Source: https://docs.agentops.ai/v2/integrations/google_adk
Track and analyze your Google Agent Development Kit (ADK) AI agents with AgentOps
AgentOps provides seamless integration with [Google Agent Development Kit (ADK)](https://google.github.io/adk-docs/), allowing you to track and analyze all your ADK agent interactions automatically.
## Installation
```bash pip theme={null}
pip install agentops google-adk
```
```bash poetry theme={null}
poetry add agentops google-adk
```
```bash uv theme={null}
uv pip install agentops google-adk
```
## Setting Up API Keys
Before using Google ADK with AgentOps, you need to set up your API keys. You can obtain:
* **GOOGLE\_API\_KEY**: From the [Google AI Studio](https://aistudio.google.com/app/apikey)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
You can either export them as environment variables or set them in a `.env` file.
```bash Export to CLI theme={null}
export GOOGLE_API_KEY="your_google_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
GOOGLE_API_KEY="your_google_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load the environment variables in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# load_dotenv() already populates os.environ; read the keys if you need them directly
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
```
## Usage
Initialize AgentOps at the beginning of your application to automatically track all Google ADK agent interactions:
```python theme={null}
import asyncio
import json
from pydantic import BaseModel, Field
import agentops
from google.adk.agents import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types
agentops.init()
# --- 1. Define Constants ---
APP_NAME = "agent_comparison_app"
USER_ID = "test_user_456"
SESSION_ID_TOOL_AGENT = "session_tool_agent_xyz"
SESSION_ID_SCHEMA_AGENT = "session_schema_agent_xyz"
MODEL_NAME = "gemini-2.0-flash"
# --- 2. Define Schemas ---
# Input schema used by both agents
class CountryInput(BaseModel):
    country: str = Field(description="The country to get information about.")

# Output schema ONLY for the second agent
class CapitalInfoOutput(BaseModel):
    capital: str = Field(description="The capital city of the country.")
    # Note: Population is illustrative; the LLM will infer or estimate this
    # as it cannot use tools when output_schema is set.
    population_estimate: str = Field(description="An estimated population of the capital city.")

# --- 3. Define the Tool (Only for the first agent) ---
def get_capital_city(country: str) -> str:
    """Retrieves the capital city of a given country."""
    print(f"\n-- Tool Call: get_capital_city(country='{country}') --")
    country_capitals = {
        "united states": "Washington, D.C.",
        "canada": "Ottawa",
        "france": "Paris",
        "japan": "Tokyo",
    }
    result = country_capitals.get(country.lower(), f"Sorry, I couldn't find the capital for {country}.")
    print(f"-- Tool Result: '{result}' --")
    return result

# --- 4. Configure Agents ---
# Agent 1: Uses a tool and output_key
capital_agent_with_tool = LlmAgent(
    model=MODEL_NAME,
    name="capital_agent_tool",
    description="Retrieves the capital city using a specific tool.",
    instruction="""You are a helpful agent that provides the capital city of a country using a tool.
The user will provide the country name in a JSON format like {"country": "country_name"}.
1. Extract the country name.
2. Use the `get_capital_city` tool to find the capital.
3. Respond clearly to the user, stating the capital city found by the tool.
""",
    tools=[get_capital_city],
    input_schema=CountryInput,
    output_key="capital_tool_result",  # Store final text response
)

# Agent 2: Uses output_schema (NO tools possible)
structured_info_agent_schema = LlmAgent(
    model=MODEL_NAME,
    name="structured_info_agent_schema",
    description="Provides capital and estimated population in a specific JSON format.",
    instruction=f"""You are an agent that provides country information.
The user will provide the country name in a JSON format like {{"country": "country_name"}}.
Respond ONLY with a JSON object matching this exact schema:
{json.dumps(CapitalInfoOutput.model_json_schema(), indent=2)}
Use your knowledge to determine the capital and estimate the population. Do not use any tools.
""",
    # *** NO tools parameter here - using output_schema prevents tool use ***
    input_schema=CountryInput,
    output_schema=CapitalInfoOutput,  # Enforce JSON output structure
    output_key="structured_info_result",  # Store final JSON response
)

# --- 5. Set up Session Management and Runners ---
session_service = InMemorySessionService()

# Create a runner for EACH agent
capital_runner = Runner(
    agent=capital_agent_with_tool,
    app_name=APP_NAME,
    session_service=session_service
)
structured_runner = Runner(
    agent=structured_info_agent_schema,
    app_name=APP_NAME,
    session_service=session_service
)

# --- 6. Define Agent Interaction Logic ---
async def call_agent_and_print(
    runner_instance: Runner,
    agent_instance: LlmAgent,
    session_id: str,
    query_json: str
):
    """Sends a query to the specified agent/runner and prints results."""
    print(f"\n>>> Calling Agent: '{agent_instance.name}' | Query: {query_json}")
    user_content = types.Content(role='user', parts=[types.Part(text=query_json)])
    final_response_content = "No final response received."
    async for event in runner_instance.run_async(user_id=USER_ID, session_id=session_id, new_message=user_content):
        # print(f"Event: {event.type}, Author: {event.author}")  # Uncomment for detailed logging
        if event.is_final_response() and event.content and event.content.parts:
            # For output_schema, the content is the JSON string itself
            final_response_content = event.content.parts[0].text
    print(f"<<< Agent '{agent_instance.name}' Response: {final_response_content}")

    current_session = await session_service.get_session(app_name=APP_NAME,
                                                        user_id=USER_ID,
                                                        session_id=session_id)
    stored_output = current_session.state.get(agent_instance.output_key)

    # Pretty print if the stored output looks like JSON (likely from output_schema)
    print(f"--- Session State ['{agent_instance.output_key}']: ", end="")
    try:
        # Attempt to parse and pretty print if it's JSON
        parsed_output = json.loads(stored_output)
        print(json.dumps(parsed_output, indent=2))
    except (json.JSONDecodeError, TypeError):
        # Otherwise, print as string
        print(stored_output)
    print("-" * 30)

# --- 7. Run Interactions ---
async def main():
    # Create sessions
    await session_service.create_session(app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID_TOOL_AGENT)
    await session_service.create_session(app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID_SCHEMA_AGENT)

    print("--- Testing Agent with Tool ---")
    await call_agent_and_print(capital_runner, capital_agent_with_tool, SESSION_ID_TOOL_AGENT, '{"country": "France"}')

    print("\n\n--- Testing Agent with Output Schema (No Tool Use) ---")
    await call_agent_and_print(structured_runner, structured_info_agent_schema, SESSION_ID_SCHEMA_AGENT, '{"country": "Japan"}')

asyncio.run(main())
```
## Examples
* Human-in-the-loop approval workflows with Google ADK agents
Visit your [AgentOps Dashboard](https://app.agentops.ai) to see detailed traces of your Google ADK agent interactions, tool usage, and session management.
# Google Generative AI
Source: https://docs.agentops.ai/v2/integrations/google_generative_ai
Monitor and analyze your Google Gemini API calls with AgentOps
AgentOps provides seamless integration with [Google's Generative AI API](https://ai.google.dev/), allowing you to monitor and analyze all your Gemini model interactions automatically.
## Installation
```bash pip theme={null}
pip install agentops google-genai
```
```bash poetry theme={null}
poetry add agentops google-genai
```
```bash uv theme={null}
uv pip install agentops google-genai
```
## Setting Up API Keys
Before using Google Gemini with AgentOps, you need to set up your API keys. You can obtain:
* **GOOGLE\_API\_KEY**: From the [Google AI Studio](https://aistudio.google.com/app/apikey)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
Then to set them up, you can either export them as environment variables or set them in a `.env` file.
```bash Export to CLI theme={null}
export GOOGLE_API_KEY="your_google_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
GOOGLE_API_KEY="your_google_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load the environment variables in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# Set up environment variables with placeholder fallback values
os.environ["GOOGLE_API_KEY"] = os.getenv("GOOGLE_API_KEY", "your_google_api_key_here")
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY", "your_agentops_api_key_here")
```
## Usage
Initialize AgentOps at the beginning of your application to automatically track all Gemini API calls.
```python Streaming theme={null}
import agentops
from google import genai
# Initialize AgentOps
agentops.init()
# Create a client
client = genai.Client(api_key="YOUR_GEMINI_API_KEY")
# Generate streaming content
for chunk in client.models.generate_content_stream(
    model='gemini-2.0-flash-001',
    contents='Explain quantum computing in simple terms.',
):
    print(chunk.text, end="", flush=True)
```
```python Simple Chat theme={null}
import agentops
from google import genai
# Initialize AgentOps
agentops.init()
# Create a client
client = genai.Client(api_key="YOUR_GEMINI_API_KEY")
# Start a chat session
chat = client.chats.create(model='gemini-2.0-flash-001')
# Send messages and get responses
response = chat.send_message('Hello, how can you help me with AI development?')
print(response.text)
# Continue the conversation
response = chat.send_message('What are the best practices for prompt engineering?')
print(response.text)
```
## Examples
* Basic Gemini usage with AgentOps
For more information on using the Google Gen AI SDK, refer to the [official documentation](https://googleapis.github.io/python-genai/).
# Haystack
Source: https://docs.agentops.ai/v2/integrations/haystack
Monitor your Haystack agents with AgentOps
[Haystack](https://docs.haystack.deepset.ai/docs/installation) is a flexible framework for building production-ready AI agents. AgentOps makes monitoring your Haystack agents seamless.
## Installation
```bash pip theme={null}
pip install agentops haystack-ai python-dotenv
```
```bash poetry theme={null}
poetry add agentops haystack-ai python-dotenv
```
```bash uv theme={null}
uv pip install agentops haystack-ai python-dotenv
```
We currently only support Haystack 2.x.
## Setting Up API Keys
You'll need API keys for AgentOps and whatever model provider you use with Haystack (e.g. OpenAI):
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
* **OPENAI\_API\_KEY**: From the [OpenAI Platform](https://platform.openai.com/api-keys) *(if using OpenAI models)*
Set these as environment variables or in a `.env` file.
```bash Export to CLI theme={null}
export AGENTOPS_API_KEY="your_agentops_api_key_here"
export OPENAI_API_KEY="your_openai_api_key_here"
```
```txt Set in .env file theme={null}
AGENTOPS_API_KEY="your_agentops_api_key_here"
OPENAI_API_KEY="your_openai_api_key_here"
```
Then load them in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
load_dotenv()
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```
## Usage
Integrating AgentOps with Haystack only takes a few lines:
```python theme={null}
import agentops
from haystack.components.generators.openai import OpenAIGenerator

agentops.init(AGENTOPS_API_KEY)

# OpenAIGenerator reads OPENAI_API_KEY from the environment by default
generator = OpenAIGenerator(model="gpt-4o-mini")
result = generator.run(prompt="In one sentence, what is AgentOps?")
print(result["replies"][0])
```
Run your script and visit the [AgentOps Dashboard](https://app.agentops.ai/drilldown) to monitor the trace.
## Examples
* [Simple Haystack example (OpenAI)](https://github.com/AgentOps-AI/agentops/blob/main/examples/haystack/haystack_example.py)
* [Haystack Azure OpenAI Chat example](https://github.com/AgentOps-AI/agentops/blob/main/examples/haystack/azure_haystack_example.py)
# IBM Watsonx.ai
Source: https://docs.agentops.ai/v2/integrations/ibm_watsonx_ai
Track and analyze your IBM Watsonx.ai API calls with AgentOps
AgentOps provides seamless integration with [IBM Watsonx.ai Python SDK](https://ibm.github.io/watsonx-ai-python-sdk/), allowing you to track and analyze all your Watsonx.ai model interactions automatically.
## Installation
```bash pip theme={null}
pip install agentops ibm-watsonx-ai
```
```bash poetry theme={null}
poetry add agentops ibm-watsonx-ai
```
```bash uv theme={null}
uv pip install agentops ibm-watsonx-ai
```
## Setting Up API Keys
Before using IBM Watsonx.ai with AgentOps, you need to set up your API keys. You can obtain:
* **IBM\_WATSONX\_API\_KEY**: From your [IBM Cloud account](https://cloud.ibm.com/)
* **IBM\_WATSONX\_URL**: The URL for your Watsonx.ai instance, typically found in your IBM Cloud dashboard.
* **IBM\_WATSONX\_PROJECT\_ID**: The project ID for your Watsonx.ai project, which you can find in the Watsonx.ai console.
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
Then to set them up, you can either export them as environment variables or set them in a `.env` file.
```bash Export to CLI theme={null}
export IBM_WATSONX_API_KEY="your_ibm_api_key_here"
export IBM_WATSONX_URL="your_ibm_url_here"
export IBM_WATSONX_PROJECT_ID="your_project_id_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
IBM_WATSONX_API_KEY="your_ibm_api_key_here"
IBM_WATSONX_URL="your_ibm_url_here"
IBM_WATSONX_PROJECT_ID="your_project_id_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load the environment variables in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# Set up environment variables with placeholder fallback values
os.environ["IBM_WATSONX_API_KEY"] = os.getenv("IBM_WATSONX_API_KEY", "your_ibm_api_key_here")
os.environ["IBM_WATSONX_URL"] = os.getenv("IBM_WATSONX_URL", "your_ibm_url_here")
os.environ["IBM_WATSONX_PROJECT_ID"] = os.getenv("IBM_WATSONX_PROJECT_ID", "your_project_id_here")
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY", "your_agentops_api_key_here")
```
## Usage
Initialize AgentOps at the beginning of your application to automatically track all IBM Watsonx.ai API calls:
```python theme={null}
import os

import agentops
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

# Initialize AgentOps (reads AGENTOPS_API_KEY from the environment)
agentops.init()

# Initialize credentials
credentials = Credentials(
    url=os.getenv("IBM_WATSONX_URL"),
    api_key=os.getenv("IBM_WATSONX_API_KEY"),
)

# Project ID
project_id = os.getenv("IBM_WATSONX_PROJECT_ID")

# Create a model instance
model = ModelInference(
    model_id="meta-llama/llama-3-3-70b-instruct",
    credentials=credentials,
    project_id=project_id
)

# Make a completion request
response = model.generate_text("What is artificial intelligence?")
print(f"Generated Text:\n{response}")

# Don't forget to close the connection when done
model.close_persistent_connection()
```
## Examples
* Basic text generation and chat
* Streaming responses with Watsonx.ai
* Text tokenization with Watsonx.ai models
## Additional Resources
* [IBM Watsonx.ai Python SDK Documentation](https://ibm.github.io/watsonx-ai-python-sdk/)
* [IBM Watsonx.ai Models](https://www.ibm.com/products/watsonx-ai/foundation-models)
# LangChain
Source: https://docs.agentops.ai/v2/integrations/langchain
Track your LangChain agents with AgentOps
[LangChain](https://python.langchain.com/docs/tutorials/) is a framework for developing applications powered by language models. AgentOps automatically tracks your LangChain agents by integrating its callback handler.
## Installation
Install AgentOps and the necessary LangChain dependencies:
```bash pip theme={null}
pip install agentops langchain langchain-community langchain-openai python-dotenv
```
```bash poetry theme={null}
poetry add agentops langchain langchain-community langchain-openai python-dotenv
```
```bash uv theme={null}
uv pip install agentops langchain langchain-community langchain-openai python-dotenv
```
## Setting Up API Keys
You'll need API keys for AgentOps and OpenAI (as `ChatOpenAI` is commonly used with LangChain):
* **OPENAI\_API\_KEY**: From the [OpenAI Platform](https://platform.openai.com/api-keys)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
Set these as environment variables or in a `.env` file.
```bash Export to CLI theme={null}
export OPENAI_API_KEY="your_openai_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
OPENAI_API_KEY="your_openai_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load them in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
load_dotenv()
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```
## Usage
Integrating AgentOps with LangChain involves using the `LangchainCallbackHandler`.
You don't need a separate `agentops.init()` call; the `LangchainCallbackHandler` initializes the AgentOps client automatically if an API key is provided to it or found in the environment.
Here's a basic example:
```python theme={null}
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType, Tool
from langchain_community.tools import DuckDuckGoSearchRun  # Example tool (requires the duckduckgo-search package)
from agentops.integration.callbacks.langchain import LangchainCallbackHandler

# 1. Initialize LangchainCallbackHandler
# AGENTOPS_API_KEY can be passed here or loaded from the environment
handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['LangChain Example'])

# 2. Define tools for the agent
search_tool = DuckDuckGoSearchRun()
tools = [
    Tool(  # Wrap DuckDuckGoSearchRun in a Tool object
        name="DuckDuckGo Search",
        func=search_tool.run,
        description="Useful for when you need to answer questions about current events or the current state of the world."
    )
]

# 3. Configure the LLM with the AgentOps handler
# OPENAI_API_KEY can be passed here or loaded from the environment
llm = ChatOpenAI(
    openai_api_key=OPENAI_API_KEY,
    callbacks=[handler],
    model='gpt-3.5-turbo',
    temperature=0  # Deterministic output for reproducibility
)

# 4. Initialize your agent, passing the handler to callbacks
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    callbacks=[handler],
    handle_parsing_errors=True
)

# 5. Run your agent
try:
    response = agent.run("Who is the current CEO of OpenAI and what is his most recent public statement?")
    print(response)
except Exception as e:
    print(f"An error occurred: {e}")
```
Visit the [AgentOps Dashboard](https://app.agentops.ai/) to see your session.
## Examples
* A detailed notebook demonstrating the LangChain callback handler integration
# LangGraph
Source: https://docs.agentops.ai/v2/integrations/langgraph
Track and analyze your LangGraph workflows with AgentOps
[LangGraph](https://github.com/langchain-ai/langgraph) is a framework for building stateful, multi-step applications with LLMs as graphs. AgentOps automatically instruments LangGraph to provide comprehensive observability into your graph-based agent workflows.
## Core Concepts
LangGraph enables you to build complex agentic workflows as graphs with:
* **Nodes**: Individual steps in your workflow (agents, tools, functions)
* **Edges**: Connections between nodes that define flow
* **State**: Shared data that flows through the graph
* **Conditional Edges**: Dynamic routing based on state or outputs
* **Cycles**: Support for iterative workflows and feedback loops
## Installation
Install AgentOps and LangGraph along with LangChain dependencies:
```bash pip theme={null}
pip install agentops langgraph langchain-openai python-dotenv
```
```bash poetry theme={null}
poetry add agentops langgraph langchain-openai python-dotenv
```
```bash uv theme={null}
uv pip install agentops langgraph langchain-openai python-dotenv
```
## Setting Up API Keys
You'll need API keys for AgentOps and your LLM provider:
* **OPENAI\_API\_KEY**: From the [OpenAI Platform](https://platform.openai.com/api-keys)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
Set these as environment variables or in a `.env` file.
```bash Export to CLI theme={null}
export OPENAI_API_KEY="your_openai_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
OPENAI_API_KEY="your_openai_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load them in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
load_dotenv()
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```
## Usage
Initialize AgentOps at the beginning of your application to automatically track all LangGraph operations:
```python theme={null}
import agentops
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI

# Initialize AgentOps
agentops.init()

# Define your graph state
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

# Create your LLM
model = ChatOpenAI(temperature=0)

# Define nodes
def agent_node(state: AgentState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

# Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("agent", agent_node)
workflow.set_entry_point("agent")
workflow.add_edge("agent", END)

# Compile and run
app = workflow.compile()
result = app.invoke({"messages": [{"role": "user", "content": "Hello!"}]})
```
## What Gets Tracked
AgentOps automatically captures:
* **Graph Structure**: Nodes, edges, and entry points during compilation
* **Execution Flow**: The path taken through your graph
* **Node Executions**: Each node execution with inputs and outputs
* **LLM Calls**: All language model interactions within nodes
* **Tool Usage**: Any tools called within your graph
* **State Changes**: How state evolves through the workflow
* **Timing Information**: Duration of each node and total execution time
## Advanced Example
Here's a more complex example with conditional routing and tools:
```python theme={null}
import agentops
from typing import Annotated, Literal, TypedDict

from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

# Initialize AgentOps
agentops.init()

# Define tools
@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Search results for: {query}"

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        return str(eval(expression))
    except Exception:
        return "Error in calculation"

# Configure model with tools
tools = [search, calculate]
model = ChatOpenAI(temperature=0).bind_tools(tools)

# Define state
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

# Define conditional logic
def should_continue(state: AgentState) -> Literal["tools", "end"]:
    messages = state["messages"]
    last_message = messages[-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return "end"

# Define nodes
def call_model(state: AgentState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

def call_tools(state: AgentState):
    messages = state["messages"]
    last_message = messages[-1]
    tool_responses = []
    for tool_call in last_message.tool_calls:
        # Execute the appropriate tool
        if tool_call["name"] == "search":
            result = search.invoke(tool_call["args"])
        elif tool_call["name"] == "calculate":
            result = calculate.invoke(tool_call["args"])
        tool_responses.append({
            "role": "tool",
            "content": result,
            "tool_call_id": tool_call["id"]
        })
    return {"messages": tool_responses}

# Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", call_tools)
workflow.set_entry_point("agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "tools": "tools",
        "end": END
    }
)
workflow.add_edge("tools", "agent")

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [{"role": "user", "content": "Search for AI news and calculate 25*4"}]
})
```
## Dashboard Insights
In your AgentOps dashboard, you'll see:
1. **Graph Visualization**: Visual representation of your compiled graph
2. **Execution Trace**: Step-by-step flow through nodes
3. **Node Metrics**: Performance data for each node
4. **LLM Analytics**: Token usage and costs across all model calls
5. **Tool Usage**: Which tools were called and their results
6. **Error Tracking**: Any failures in node execution
## Examples
* Complete example showing agent workflows with tools
## Best Practices
1. **Initialize Early**: Call `agentops.init()` before creating your graph
2. **Use Descriptive Names**: Name your nodes clearly for better traces
3. **Handle Errors**: Implement error handling in your nodes
4. **Monitor State Size**: Large states can impact performance
5. **Leverage Conditional Edges**: Use them for dynamic workflows
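The error-handling practice above can be sketched as a node that records failures in its returned state instead of letting an exception crash the graph. This is a hypothetical pattern, not an AgentOps API; `risky_lookup` and `safe_search_node` are illustrative names:

```python theme={null}
def risky_lookup(query: str) -> str:
    # Stand-in for real node logic (e.g. a tool or LLM call) that may fail
    if not query:
        raise ValueError("empty query")
    return f"results for {query}"

def safe_search_node(state: dict) -> dict:
    """Catch errors and surface them in state so the trace shows what went wrong."""
    try:
        return {"messages": [risky_lookup(state["query"])], "error": False}
    except Exception as exc:
        return {"messages": [f"search failed: {exc}"], "error": True}
```

Because the failure is captured in state, downstream conditional edges can route to a recovery node, and AgentOps still records the full node execution.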
# LiteLLM
Source: https://docs.agentops.ai/v2/integrations/litellm
Track and analyze your LiteLLM calls across multiple providers with AgentOps
AgentOps provides seamless integration with [LiteLLM](https://github.com/BerriAI/litellm), allowing you to automatically track all your LLM API calls across different providers through a unified interface.
## Installation
```bash pip theme={null}
pip install agentops litellm
```
```bash poetry theme={null}
poetry add agentops litellm
```
```bash uv theme={null}
uv pip install agentops litellm
```
## Setting Up API Keys
Before using LiteLLM with AgentOps, you need to set up your API keys. You can obtain:
* **Provider API Keys**: From your chosen LLM provider (OpenAI, Anthropic, Google, etc.)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
Then to set them up, you can either export them as environment variables or set them in a `.env` file.
```bash Export to CLI theme={null}
export OPENAI_API_KEY="your_openai_api_key_here"
export ANTHROPIC_API_KEY="your_anthropic_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
OPENAI_API_KEY="your_openai_api_key_here"
ANTHROPIC_API_KEY="your_anthropic_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load the environment variables in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# Set up environment variables with placeholder fallback values
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY", "your_openai_api_key_here")
os.environ["ANTHROPIC_API_KEY"] = os.getenv("ANTHROPIC_API_KEY", "your_anthropic_api_key_here")
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY", "your_agentops_api_key_here")
```
## Usage
The simplest way to integrate AgentOps with LiteLLM is to set up the success\_callback.
```python theme={null}
import litellm
from litellm import completion
# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]
# Make completion requests with LiteLLM
response = completion(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(response.choices[0].message.content)
```
## Examples
```python Streaming theme={null}
import litellm
from litellm import completion
# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]
# Make a streaming completion request
response = completion(
model="gpt-4",
messages=[{"role": "user", "content": "Write a short poem about AI."}],
stream=True
)
# Process the streaming response
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()  # Add a newline at the end
```
```python Multi-Provider theme={null}
import litellm
from litellm import completion
# Configure LiteLLM to use AgentOps
litellm.success_callback = ["agentops"]
# OpenAI request
openai_response = completion(
model="gpt-4",
messages=[{"role": "user", "content": "What are the advantages of GPT-4?"}]
)
print("OpenAI Response:", openai_response.choices[0].message.content)
# Anthropic request using the same interface
anthropic_response = completion(
model="anthropic/claude-3-opus-20240229",
messages=[{"role": "user", "content": "What are the advantages of Claude?"}]
)
print("Anthropic Response:", anthropic_response.choices[0].message.content)
# All requests across different providers are automatically tracked by AgentOps
```
## Additional Resources
For more information on integrating AgentOps with LiteLLM, refer to the [LiteLLM documentation on AgentOps integration](https://docs.litellm.ai/docs/observability/agentops_integration).
# LlamaIndex
Source: https://docs.agentops.ai/v2/integrations/llamaindex
AgentOps works seamlessly with LlamaIndex, a framework for building context-augmented generative AI applications with LLMs.
[LlamaIndex](https://www.llamaindex.ai/) is a framework for building context-augmented generative AI applications with LLMs. AgentOps provides comprehensive observability into your LlamaIndex applications through automatic instrumentation, allowing you to monitor LLM calls, track performance, and analyze your application's behavior.
## Installation
Install AgentOps and the LlamaIndex AgentOps instrumentation package:
```bash pip theme={null}
pip install agentops llama-index-instrumentation-agentops
```
```bash poetry theme={null}
poetry add agentops llama-index-instrumentation-agentops
```
```bash uv theme={null}
uv pip install agentops llama-index-instrumentation-agentops
```
## Setting Up API Keys
You'll need an AgentOps API key from your [AgentOps Dashboard](https://app.agentops.ai/):
```bash Export to CLI theme={null}
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
## Usage
Simply set the global handler to "agentops" at the beginning of your LlamaIndex application. AgentOps will automatically instrument LlamaIndex to track your LLM interactions and application performance.
```python theme={null}
from llama_index.core import set_global_handler
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
# Set the global handler to AgentOps
# NOTE: Feel free to set your AgentOps environment variables (e.g., 'AGENTOPS_API_KEY')
# as outlined in the AgentOps documentation, or pass the equivalent keyword arguments
# anticipated by AgentOps' AOClient as **eval_params in set_global_handler.
set_global_handler("agentops")
# Your LlamaIndex application code here
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
# Create a query engine
query_engine = index.as_query_engine()
# Query your data - AgentOps will automatically track this
response = query_engine.query("What is the main topic of these documents?")
print(response)
```
## What Gets Tracked
When you use AgentOps with LlamaIndex, the following operations are automatically tracked:
* **LLM Calls**: All interactions with language models including prompts, completions, and token usage
* **Embeddings**: Vector embedding generation and retrieval operations
* **Query Operations**: Search and retrieval operations on your indexes
* **Performance Metrics**: Response times, token costs, and success/failure rates
## Additional Resources
For more detailed information about LlamaIndex's observability features and AgentOps integration, check out the [LlamaIndex documentation](https://docs.llamaindex.ai/en/stable/module_guides/observability/#agentops).
# Mem0
Source: https://docs.agentops.ai/v2/integrations/mem0
Track and monitor Mem0 memory operations with AgentOps
[Mem0](https://mem0.ai/) provides a smart memory layer for AI applications, enabling personalized interactions by remembering user preferences, conversation history, and context across sessions.
## Why Track Mem0 with AgentOps?
When building memory-powered AI applications, you need visibility into:
* **Memory Operations**: Track when memories are created, updated, or retrieved
* **Search Performance**: Monitor how effectively your AI finds relevant memories
* **Memory Usage Patterns**: Understand what information is being stored and accessed
* **Error Tracking**: Identify issues with memory storage or retrieval
* **Cost Analysis**: Track API calls to both Mem0 and your LLM provider
AgentOps automatically instruments Mem0 to provide complete observability of your memory operations.
## Installation
```bash pip theme={null}
pip install agentops mem0ai python-dotenv
```
```bash poetry theme={null}
poetry add agentops mem0ai python-dotenv
```
```bash uv theme={null}
uv pip install agentops mem0ai python-dotenv
```
## Environment Configuration
Load environment variables and set up API keys. The MEM0\_API\_KEY is only required if you're using the cloud-based MemoryClient.
```bash Export to CLI theme={null}
export AGENTOPS_API_KEY="your_agentops_api_key_here"
export OPENAI_API_KEY="your_openai_api_key_here"
export MEM0_API_KEY="your_mem0_api_key_here" # Only needed for the cloud MemoryClient
```
```txt Set in .env file theme={null}
AGENTOPS_API_KEY="your_agentops_api_key_here"
OPENAI_API_KEY="your_openai_api_key_here"
MEM0_API_KEY="your_mem0_api_key_here"
```
## Tracking Memory Operations
```python Local Memory theme={null}
import agentops
from mem0 import Memory

# Start a trace to group related operations
agentops.start_trace("user_preference_learning", tags=["mem0_memory_example"])

try:
    # Initialize Memory - AgentOps tracks the configuration
    memory = Memory.from_config({
        "llm": {
            "provider": "openai",
            "config": {
                "model": "gpt-4o-mini",
                "temperature": 0.1
            }
        }
    })

    # Add memories - AgentOps tracks each operation
    memory.add(
        "I prefer morning meetings and dark roast coffee",
        user_id="user_123",
        metadata={"category": "preferences"}
    )

    # Search memories - AgentOps tracks search queries and results
    results = memory.search(
        "What are the user's meeting preferences?",
        user_id="user_123"
    )

    # End trace - AgentOps aggregates all operations
    agentops.end_trace(end_state="success")
except Exception as e:
    agentops.end_trace(end_state="error")
```
```python Cloud Memory theme={null}
import agentops
from mem0 import MemoryClient

# Start trace for cloud operations
agentops.start_trace("cloud_memory_sync", tags=["mem0_memoryclient_example"])

try:
    # Initialize MemoryClient - AgentOps tracks API authentication
    client = MemoryClient(api_key="your_mem0_api_key")

    # Batch add memories - AgentOps tracks bulk operations
    messages = [
        {"role": "user", "content": "I work in software engineering"},
        {"role": "user", "content": "I prefer Python over Java"},
    ]
    client.add(messages, user_id="user_123")

    # Search with filters - AgentOps tracks complex queries
    filters = {"AND": [{"user_id": "user_123"}]}
    results = client.search(
        query="What programming languages does the user know?",
        filters=filters,
        version="v2"
    )

    # End trace - AgentOps aggregates all operations
    agentops.end_trace(end_state="success")
except Exception as e:
    agentops.end_trace(end_state="error")
```
## What You'll See in AgentOps
When using Mem0 with AgentOps, your dashboard will show:
1. **Memory Operation Timeline**: Visual flow of all memory operations
2. **Search Analytics**: Query patterns and retrieval effectiveness
3. **Memory Growth**: Track how user memories accumulate over time
4. **Performance Metrics**: Latency for adds, searches, and retrievals
5. **Error Tracking**: Failed operations with full error context
6. **Cost Attribution**: Token usage for memory extraction and searches
## Examples
* Simple example showing memory storage and retrieval with AgentOps tracking
* Concurrent memory operations with async/await patterns
# Memori
Source: https://docs.agentops.ai/v2/integrations/memori
Track and monitor Memori memory operations with AgentOps
[Memori](https://github.com/GibsonAI/memori) provides automatic short-term and long-term memory for AI applications and agents, seamlessly recording conversations and adding context to LLM interactions without requiring explicit memory management.
## Why Track Memori with AgentOps?
* **Memory Recording**: Track when conversations are automatically captured and stored
* **Context Injection**: Monitor how memory is automatically added to LLM context
* **Conversation Flow**: Understand the complete dialogue history across sessions
* **Memory Effectiveness**: Analyze how historical context improves response quality
* **Performance Impact**: Track latency and token usage from memory operations
* **Error Tracking**: Identify issues with memory recording or context retrieval
AgentOps automatically instruments Memori to provide complete observability of your memory operations.
## Installation
```bash pip theme={null}
pip install agentops memorisdk openai python-dotenv
```
```bash poetry theme={null}
poetry add agentops memorisdk openai python-dotenv
```
```bash uv theme={null}
uv pip install agentops memorisdk openai python-dotenv
```
## Environment Configuration
Load environment variables and set up API keys.
```bash Export to CLI theme={null}
export AGENTOPS_API_KEY="your_agentops_api_key_here"
export OPENAI_API_KEY="your_openai_api_key_here"
```
```txt Set in .env file theme={null}
AGENTOPS_API_KEY="your_agentops_api_key_here"
OPENAI_API_KEY="your_openai_api_key_here"
```
## Tracking Automatic Memory Operations
```python Basic Memory Tracking theme={null}
import agentops
from memori import Memori
from openai import OpenAI
# Start a trace to group related operations
agentops.start_trace("memori_conversation_flow", tags=["memori_memory_example"])
try:
# Initialize OpenAI client
openai_client = OpenAI()
# Initialize Memori with conscious ingestion enabled
# AgentOps tracks the memory configuration
memori = Memori(
database_connect="sqlite:///agentops_example.db",
conscious_ingest=True,
auto_ingest=True,
)
memori.enable()
# First conversation - AgentOps tracks LLM call and memory recording
response1 = openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "user", "content": "I'm working on a Python FastAPI project"}
],
)
print("Assistant:", response1.choices[0].message.content)
# Second conversation - AgentOps tracks memory retrieval and context injection
response2 = openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Help me add user authentication"}],
)
print("Assistant:", response2.choices[0].message.content)
print("💡 Notice: Memori automatically provided FastAPI project context!")
# End trace - AgentOps aggregates all operations
agentops.end_trace(end_state="success")
except Exception as e:
agentops.end_trace(end_state="error")
```
## What You'll See in AgentOps
When using Memori with AgentOps, your dashboard will show:
1. **Conversation Timeline**: Complete flow of all conversations with memory context
2. **Memory Injection Analytics**: Track when and how much context is automatically added
3. **Context Relevance**: Monitor the effectiveness of automatic memory retrieval
4. **Performance Metrics**: Latency impact of memory operations on LLM calls
5. **Token Usage**: Track additional tokens consumed by memory context
6. **Memory Growth**: Visualize how conversation history accumulates over time
7. **Error Tracking**: Failed memory operations with full error context
## Key Benefits of Memori + AgentOps
* **Zero-Effort Memory**: Memori automatically handles conversation recording
* **Intelligent Context**: Only relevant memory is injected into LLM context
* **Complete Visibility**: AgentOps tracks all automatic memory operations
* **Performance Monitoring**: Understand the cost/benefit of automatic memory
* **Debugging Support**: Full traceability of memory decisions and context injection
# OpenAI
Source: https://docs.agentops.ai/v2/integrations/openai
Track and analyze your OpenAI API calls with AgentOps
AgentOps seamlessly integrates with [OpenAI's Python SDK](https://github.com/openai/openai-python), allowing you to track and analyze all your OpenAI API calls automatically.
## Installation
```bash pip theme={null}
pip install agentops openai
```
```bash poetry theme={null}
poetry add agentops openai
```
```bash uv theme={null}
uv pip install agentops openai
```
## Setting Up API Keys
Before using OpenAI with AgentOps, you need to set up your API keys. You can obtain:
* **OPENAI\_API\_KEY**: From the [OpenAI Platform](https://platform.openai.com/api-keys)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
Then to set them up, you can either export them as environment variables or set them in a `.env` file.
```bash Export to CLI theme={null}
export OPENAI_API_KEY="your_openai_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
OPENAI_API_KEY="your_openai_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load the environment variables in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# Make the keys available to the SDKs via the environment
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY")
```
## Usage
Initialize AgentOps at the beginning of your application to automatically track all OpenAI API calls:
```python theme={null}
import agentops
from openai import OpenAI
# Initialize AgentOps
agentops.init()
# Create OpenAI client
client = OpenAI()
# Make API calls as usual - AgentOps will track them automatically
response = client.chat.completions.create(
model="gpt-4",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is the capital of France?"}
]
)
print(response.choices[0].message.content)
```
## Examples
```python Streaming theme={null}
import agentops
from openai import OpenAI
# Initialize AgentOps
agentops.init()
# Create OpenAI client
client = OpenAI()
# Make a streaming API call
stream = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Write a short poem about AI."}
],
stream=True
)
# Process the streaming response
for chunk in stream:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
```
```python Function Calling theme={null}
import json
import agentops
from openai import OpenAI
# Initialize AgentOps
agentops.init()
# Create OpenAI client
client = OpenAI()
# Define tools
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
}
},
"required": ["location"],
},
},
}
]
# Function implementation
def get_weather(location):
return json.dumps({"location": location, "temperature": "72", "unit": "fahrenheit", "forecast": ["sunny", "windy"]})
# Make a function call API request
messages = [
{"role": "system", "content": "You are a helpful weather assistant."},
{"role": "user", "content": "What's the weather like in Boston?"}
]
response = client.chat.completions.create(
model="gpt-4",
messages=messages,
tools=tools,
tool_choice="auto",
)
# Process response
response_message = response.choices[0].message
messages.append(response_message)
if response_message.tool_calls:
# Process each tool call
for tool_call in response_message.tool_calls:
function_name = tool_call.function.name
function_args = json.loads(tool_call.function.arguments)
if function_name == "get_weather":
function_response = get_weather(function_args.get("location"))
# Add tool response to messages
messages.append(
{
"role": "tool",
"tool_call_id": tool_call.id,
"name": function_name,
"content": function_response,
}
)
# Get a new response from the model
second_response = client.chat.completions.create(
model="gpt-4",
messages=messages,
)
print(second_response.choices[0].message.content)
else:
print(response_message.content)
```
## More Examples
* Advanced multi-tool RAG example
* Demonstrates asynchronous calls with the OpenAI SDK
* Shows synchronous calls with the OpenAI SDK
* Example of integrating web search capabilities
# OpenAI Agents JS
Source: https://docs.agentops.ai/v2/integrations/openai_agents_js
AgentOps integration with the OpenAI Agents SDK for TypeScript/JavaScript.
[OpenAI Agents JS](https://github.com/openai/openai-agents-js) is a lightweight yet powerful SDK for building multi-agent workflows in TypeScript. AgentOps seamlessly integrates to provide observability into these workflows.
* [OpenAI Agents JS documentation](https://openai.github.io/openai-agents-js)
* [Python guide](/v2/integrations/openai_agents_python)
## Installation
```bash theme={null}
npm install agentops @openai/agents
```
## Usage
```typescript theme={null}
import { agentops } from 'agentops';
import { Agent, run } from '@openai/agents';
await agentops.init();
const agent = new Agent({
name: 'Assistant',
instructions: 'You are a helpful assistant.'
});
const result = await run(agent, 'Hello, world!');
console.log(result.finalOutput);
```
# OpenAI Agents SDK
Source: https://docs.agentops.ai/v2/integrations/openai_agents_python
AgentOps and OpenAI Agents SDK integration for powerful multi-agent workflow monitoring.
[OpenAI Agents Python](https://github.com/openai/openai-agents-python) is a lightweight yet powerful SDK for building multi-agent workflows in Python. AgentOps seamlessly integrates to provide observability into these workflows.
* [OpenAI Agents Python documentation](https://openai.github.io/openai-agents-python/)
* [TypeScript guide](/v2/integrations/openai_agents_js)
## Core Concepts
* **Agents**: LLMs configured with instructions, tools, guardrails, and handoffs
* **Handoffs**: Allow agents to transfer control to other agents for specific tasks
* **Guardrails**: Configurable safety checks for input and output validation
* **Tracing**: Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows
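Stripped of the LLM, a handoff is just routing a request to the agent best suited to handle it. A plain-Python sketch of the triage idea, where a naive keyword check stands in for the model's routing decision:

```python
def spanish_agent(message: str) -> str:
    return "¡Hola! Estoy bien, gracias."

def english_agent(message: str) -> str:
    return "Hello! I'm doing well, thanks."

def triage(message: str) -> str:
    # Keyword matching stands in for the LLM deciding which agent to hand off to
    spanish_markers = ("hola", "gracias", "cómo")
    if any(marker in message.lower() for marker in spanish_markers):
        return spanish_agent(message)
    return english_agent(message)

reply = triage("Hola, ¿cómo estás?")
print(reply)
```

In the real SDK the triage agent's model makes this decision and the `handoffs` list defines the candidates, as shown in the Handoffs example below.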
## Python
### Installation
Install AgentOps, the OpenAI Agents SDK, and `python-dotenv` for managing API keys:
```bash pip theme={null}
pip install agentops openai-agents python-dotenv
```
```bash poetry theme={null}
poetry add agentops openai-agents python-dotenv
```
```bash uv theme={null}
uv pip install agentops openai-agents python-dotenv
```
### Setting Up API Keys
Before using the OpenAI Agents SDK with AgentOps, you need to set up your API keys:
* **OPENAI\_API\_KEY**: From the [OpenAI Platform](https://platform.openai.com/api-keys)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
You can set these as environment variables or in a `.env` file.
```bash Export to CLI theme={null}
export OPENAI_API_KEY="your_openai_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
OPENAI_API_KEY="your_openai_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load them in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
load_dotenv()
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```
### Usage
Once initialized, AgentOps automatically instruments the OpenAI Agents SDK. You can then create agents, run them, and track their interactions.
```python theme={null}
import agentops
from agents import Agent, Runner
# Initialize AgentOps
agentops.init()
# Create an agent with instructions
agent = Agent(name="Assistant", instructions="You are a helpful assistant")
result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)
```
## Examples
```python Handoffs theme={null}
from agents import Agent, Runner
import asyncio
import agentops
import os
agentops.init()
spanish_agent = Agent(
name="Spanish agent",
instructions="You only speak Spanish.",
)
english_agent = Agent(
name="English agent",
instructions="You only speak English",
)
triage_agent = Agent(
name="Triage agent",
instructions="Handoff to the appropriate agent based on the language of the request.",
handoffs=[spanish_agent, english_agent],
)
async def main():
result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
print(result.final_output)
# Expected Output: ¡Hola! Estoy bien, gracias por preguntar. ¿Y tú, cómo estás?
if __name__ == "__main__":
asyncio.run(main())
```
```python Function Calling theme={null}
import asyncio
from agents import Agent, Runner, function_tool
import agentops
import os
agentops.init()
@function_tool
def get_weather(city: str) -> str:
return f"The weather in {city} is sunny."
agent = Agent(
name="Weather Agent",
instructions="You are a helpful agent that can get weather information.",
tools=[get_weather],
)
async def main():
result = await Runner.run(agent, input="What's the weather in Tokyo?")
print(result.final_output)
# Expected Output: The weather in Tokyo is sunny.
if __name__ == "__main__":
asyncio.run(main())
```
## More Examples
* Demonstrates a customer service workflow
* Illustrates various agent interaction patterns
* Showcases agents utilizing different tools
# Smolagents
Source: https://docs.agentops.ai/v2/integrations/smolagents
Track and analyze your Smolagents AI agents with AgentOps
AgentOps provides seamless integration with [Smolagents](https://github.com/huggingface/smolagents), HuggingFace's lightweight framework for building AI agents. Monitor your agent workflows, tool usage, and execution traces automatically.
## Core Concepts
Smolagents is designed around several key concepts:
* **Agents**: AI assistants that can use tools and reason through problems
* **Tools**: Functions that agents can call to interact with external systems
* **Models**: LLM backends that power agent reasoning (supports various providers via LiteLLM)
* **Code Execution**: Agents can write and execute Python code in sandboxed environments
* **Multi-Agent Systems**: Orchestrate multiple specialized agents working together
## Installation
Install AgentOps and Smolagents, along with any additional dependencies:
```bash pip theme={null}
pip install agentops smolagents python-dotenv
```
```bash poetry theme={null}
poetry add agentops smolagents python-dotenv
```
```bash uv theme={null}
uv pip install agentops smolagents python-dotenv
```
## Setting Up API Keys
Before using Smolagents with AgentOps, you need to set up your API keys:
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
* **LLM API Keys**: Depending on your chosen model provider (e.g., OPENAI\_API\_KEY, ANTHROPIC\_API\_KEY)
Set these as environment variables or in a `.env` file.
```bash Export to CLI theme={null}
export AGENTOPS_API_KEY="your_agentops_api_key_here"
export OPENAI_API_KEY="your_openai_api_key_here"
```
```txt Set in .env file theme={null}
AGENTOPS_API_KEY="your_agentops_api_key_here"
OPENAI_API_KEY="your_openai_api_key_here"
```
Then load them in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
load_dotenv()
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```
## Usage
Initialize AgentOps before creating your Smolagents to automatically track all agent interactions:
```python theme={null}
import agentops
from smolagents import LiteLLMModel, ToolCallingAgent, DuckDuckGoSearchTool
# Initialize AgentOps
agentops.init()
# Create a model (supports various providers via LiteLLM)
model = LiteLLMModel("openai/gpt-4o-mini")
# Create an agent with tools
agent = ToolCallingAgent(
tools=[DuckDuckGoSearchTool()],
model=model,
)
# Run the agent
result = agent.run("What are the latest developments in AI safety research?")
print(result)
```
## Examples
```python Simple Math Agent theme={null}
import agentops
from smolagents import LiteLLMModel, CodeAgent
# Initialize AgentOps
agentops.init()
# Create a model
model = LiteLLMModel("openai/gpt-4o-mini")
# Create a code agent that can perform calculations
agent = CodeAgent(
tools=[], # No external tools needed for math
model=model,
additional_authorized_imports=["math", "numpy"],
)
# Ask the agent to solve a math problem
result = agent.run(
"Calculate the compound interest on $10,000 invested at 5% annual rate "
"for 10 years, compounded monthly. Show your work."
)
print(result)
```
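For reference, you can sanity-check the agent's answer against the closed-form compound-interest formula:

```python
principal = 10_000
annual_rate = 0.05
periods_per_year = 12  # monthly compounding
years = 10

# A = P * (1 + r/n) ** (n * t)
amount = principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)
interest = amount - principal
print(f"Final amount: ${amount:,.2f}, interest earned: ${interest:,.2f}")
```

The agent's worked answer should land on roughly the same figures; any large discrepancy in its reasoning will be visible step by step in the AgentOps trace.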
```python Research Agent with Tools theme={null}
import agentops
from smolagents import (
LiteLLMModel,
ToolCallingAgent,
DuckDuckGoSearchTool,
tool
)
# Initialize AgentOps
agentops.init()
# Create a custom tool
@tool
def word_counter(text: str) -> str:
"""
Counts the number of words in a given text.
Args:
text: The text to count words in.
Returns:
A string with the word count.
"""
word_count = len(text.split())
return f"The text contains {word_count} words."
# Create model and agent
model = LiteLLMModel("openai/gpt-4o-mini")
agent = ToolCallingAgent(
tools=[DuckDuckGoSearchTool(), word_counter],
model=model,
)
# Run a research task
result = agent.run(
"Search for information about the James Webb Space Telescope's latest discoveries. "
"Then count how many words are in your summary."
)
print(result)
```
```python Multi-Step Task Agent theme={null}
import agentops
from smolagents import LiteLLMModel, CodeAgent, tool
import json
# Initialize AgentOps
agentops.init()
# Create tools for data processing
@tool
def save_json(data: dict, filename: str) -> str:
"""
Saves data to a JSON file.
Args:
data: Dictionary to save
filename: Name of the file to save to
Returns:
Success message
"""
with open(filename, 'w') as f:
json.dump(data, f, indent=2)
return f"Data saved to {filename}"
@tool
def load_json(filename: str) -> dict:
"""
Loads data from a JSON file.
Args:
filename: Name of the file to load from
Returns:
The loaded data as a dictionary
"""
with open(filename, 'r') as f:
return json.load(f)
# Create agent
model = LiteLLMModel("openai/gpt-4o-mini")
agent = CodeAgent(
tools=[save_json, load_json],
model=model,
additional_authorized_imports=["pandas", "datetime"],
)
# Run a multi-step data processing task
result = agent.run("""
1. Create a dataset of 5 fictional employees with names, departments, and salaries
2. Save this data to 'employees.json'
3. Load the data back and calculate the average salary
4. Find the highest paid employee
5. Return a summary of your findings
""")
print(result)
```
## More Examples
* Complex multi-agent web browsing system
* Convert natural language queries to SQL
Visit your [AgentOps Dashboard](https://app.agentops.ai) to see detailed traces of your Smolagents executions, tool usage, and agent reasoning steps.
# xAI (Grok)
Source: https://docs.agentops.ai/v2/integrations/xai
Track and analyze your xAI Grok API calls with AgentOps
AgentOps seamlessly integrates with [xAI's Grok models](https://x.ai/), allowing you to track and analyze all your Grok API calls automatically through the OpenAI-compatible API.
## Installation
```bash pip theme={null}
pip install agentops openai
```
```bash poetry theme={null}
poetry add agentops openai
```
```bash uv theme={null}
uv pip install agentops openai
```
## Setting Up API Keys
Before using xAI with AgentOps, you need to set up your API keys. You can obtain:
* **XAI\_API\_KEY**: From the [xAI Developer Platform](https://console.x.ai/)
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
Then to set them up, you can either export them as environment variables or set them in a `.env` file.
```bash Export to CLI theme={null}
export XAI_API_KEY="your_xai_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
XAI_API_KEY="your_xai_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
Then load the environment variables in your Python code:
```python theme={null}
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# Make the keys available to the SDKs via the environment
os.environ["XAI_API_KEY"] = os.getenv("XAI_API_KEY")
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY")
```
## Usage
Initialize AgentOps at the beginning of your application. Then, use the OpenAI SDK with xAI's base URL to interact with Grok. AgentOps will automatically track all API calls.
```python Simple Chat theme={null}
import os
import agentops
from openai import OpenAI
# Initialize AgentOps
agentops.init()
# Create OpenAI client configured for xAI
client = OpenAI(
api_key=os.getenv("XAI_API_KEY"),
base_url="https://api.x.ai/v1",
)
# Basic chat completion
completion = client.chat.completions.create(
model="grok-3-latest",
messages=[
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Explain the concept of AI observability in simple terms."},
],
)
print(completion.choices[0].message.content)
```
```python Streaming Chat theme={null}
import os
import agentops
from openai import OpenAI
# Initialize AgentOps
agentops.init()
# Create OpenAI client configured for xAI
client = OpenAI(
api_key=os.getenv("XAI_API_KEY"),
base_url="https://api.x.ai/v1",
)
# Streaming chat completion
stream = client.chat.completions.create(
model="grok-3-latest",
messages=[
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Tell me about the latest developments in AI."},
],
stream=True,
)
for chunk in stream:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
```
## Examples
* Basic usage patterns for Grok LLM
* Demonstrates using Grok with vision capabilities
# Xpander
Source: https://docs.agentops.ai/v2/integrations/xpander
Monitor and analyze your Xpander agent workflows with automatic AgentOps instrumentation
[Xpander](https://xpander.ai/) is a powerful platform for building and deploying AI agents with sophisticated workflow management capabilities. AgentOps provides seamless integration with the Xpander SDK, automatically instrumenting all agent activities, tool executions, and LLM interactions without any manual setup.
## Installation
Install AgentOps and the Xpander SDK, along with the required dependencies:
```bash pip theme={null}
pip install agentops xpander-sdk xpander-utils openai python-dotenv loguru
```
```bash poetry theme={null}
poetry add agentops xpander-sdk xpander-utils openai python-dotenv loguru
```
```bash uv theme={null}
uv add agentops xpander-sdk xpander-utils openai python-dotenv loguru
```
## Setting Up API Keys
You'll need API keys for AgentOps, Xpander, and OpenAI:
* **AGENTOPS\_API\_KEY**: From your [AgentOps Dashboard](https://app.agentops.ai/)
* **XPANDER\_API\_KEY**: From your [Xpander Dashboard](https://app.xpander.ai/)
* **XPANDER\_AGENT\_ID**: The ID of your Xpander agent
* **OPENAI\_API\_KEY**: From the [OpenAI Platform](https://platform.openai.com/api-keys)
Set these as environment variables or in a `.env` file:
```bash Export to CLI theme={null}
export AGENTOPS_API_KEY="your_agentops_api_key_here"
export XPANDER_API_KEY="your_xpander_api_key_here"
export XPANDER_AGENT_ID="your_xpander_agent_id_here"
export OPENAI_API_KEY="your_openai_api_key_here"
```
```txt Set in .env file theme={null}
AGENTOPS_API_KEY="your_agentops_api_key_here"
XPANDER_API_KEY="your_xpander_api_key_here"
XPANDER_AGENT_ID="your_xpander_agent_id_here"
OPENAI_API_KEY="your_openai_api_key_here"
```
You can also store your configuration in an `xpander_config.json` file:
```json theme={null}
{
"api_key": "your_xpander_api_key_here",
"agent_id": "your_xpander_agent_id_here"
}
```
## Quick Start
The key to AgentOps + Xpander integration is **initialization order**: Initialize AgentOps **before** importing the Xpander SDK to enable automatic instrumentation.
The following example shows the callback-based integration pattern. For a complete working example, see our [Xpander example](/v2/examples/xpander).
```python theme={null}
# ruff: noqa: E402
import os
import json
import asyncio
from pathlib import Path
from dotenv import load_dotenv
# Load environment variables first
load_dotenv()
# 1. Initialize AgentOps FIRST (this enables auto-instrumentation)
import agentops
agentops.init(
api_key=os.getenv("AGENTOPS_API_KEY"),
trace_name="my-xpander-coding-agent-callbacks",
default_tags=["xpander", "coding-agent", "callbacks"],
)
# 2. Now import Xpander SDK (instrumentation will automatically activate)
from xpander_sdk import XpanderClient, LLMProvider, LLMTokens, Tokens, Agent, ExecutionStatus
from xpander_utils.events import XpanderEventListener, AgentExecutionResult, AgentExecution
from openai import AsyncOpenAI
class MyAgent:
def __init__(self):
# Load config
config_path = Path(__file__).parent / "xpander_config.json"
config = json.loads(config_path.read_text())
# Get API keys
xpander_key = config.get("api_key") or os.getenv("XPANDER_API_KEY")
agent_id = config.get("agent_id") or os.getenv("XPANDER_AGENT_ID")
openai_key = os.getenv("OPENAI_API_KEY")
# Initialize clients
self.openai = AsyncOpenAI(api_key=openai_key)
xpander_client = XpanderClient(api_key=xpander_key)
self.agent_backend: Agent = xpander_client.agents.get(agent_id=agent_id)
self.agent_backend.select_llm_provider(LLMProvider.OPEN_AI)
async def run(self, user_input: str) -> dict:
tokens = Tokens(worker=LLMTokens(0, 0, 0))
while not self.agent_backend.is_finished():
# Call LLM
response = await self.openai.chat.completions.create(
model="gpt-4",
messages=self.agent_backend.messages,
tools=self.agent_backend.get_tools(),
tool_choice=self.agent_backend.tool_choice,
temperature=0,
)
# Track tokens
if hasattr(response, "usage"):
tokens.worker.prompt_tokens += response.usage.prompt_tokens
tokens.worker.completion_tokens += response.usage.completion_tokens
tokens.worker.total_tokens += response.usage.total_tokens
# Add response to agent context
self.agent_backend.add_messages(response.model_dump())
self.agent_backend.report_execution_metrics(llm_tokens=tokens, ai_model="gpt-4")
# Execute any tool calls
tool_calls = self.agent_backend.extract_tool_calls(response.model_dump())
if tool_calls:
tool_results = await asyncio.to_thread(self.agent_backend.run_tools, tool_calls)
result = self.agent_backend.retrieve_execution_result()
return {"result": result.result, "thread_id": result.memory_thread_id}
# Set up event listener with callback handlers
listener = XpanderEventListener(
api_key=os.getenv("XPANDER_API_KEY"),
agent_id=os.getenv("XPANDER_AGENT_ID")
)
async def on_execution_request(execution_task: AgentExecution) -> AgentExecutionResult:
agent = MyAgent()
agent.agent_backend.init_task(execution=execution_task.model_dump())
try:
await agent.run(execution_task.input.text)
execution_result = agent.agent_backend.retrieve_execution_result()
return AgentExecutionResult(
result=execution_result.result,
is_success=execution_result.status == ExecutionStatus.COMPLETED,
)
except Exception as e:
print(f"Error: {e}")
raise
# Register the callback
listener.register(on_execution_request=on_execution_request)
```
## What's Automatically Tracked
AgentOps automatically captures comprehensive telemetry from your Xpander agents:
### 🤖 Agent Activities
* Agent initialization and configuration
* Task lifecycle (start, execution steps, completion)
* Workflow phase transitions (planning → executing → finished)
* Session management and context persistence
### 🧠 LLM Interactions
* All OpenAI API calls with full request/response data
* Token usage and cost tracking across models
* Conversation history and context management
* Model parameters and settings
### 🛠️ Tool Executions
* Tool call detection with parameters and arguments
* Tool execution results and success/failure status
* Tool performance metrics and timing
* Tool call hierarchies and dependencies
### 📊 Performance Metrics
* End-to-end execution duration and timing
* Step-by-step workflow progression
* Resource utilization and efficiency metrics
* Error handling and exception tracking
## Key Features
### ✅ Zero-Configuration Setup
No manual trace creation or span management required. Simply initialize AgentOps before importing Xpander SDK.
### ✅ Complete Workflow Visibility
Track the entire agent execution flow from task initiation to completion, including all intermediate steps.
### ✅ Real-time Monitoring
View your agent activities in real-time on the AgentOps dashboard as they execute.
### ✅ Tool Execution Insights
Monitor which tools are being called, their parameters, execution time, and results.
### ✅ Cost Tracking
Automatic token usage tracking for all LLM interactions with cost analysis.
## Callback Handler Pattern
The Xpander integration supports two main patterns:
1. **Direct Integration**: Directly instrument your agent code (shown above)
2. **Callback Handler**: Use XpanderEventListener for webhook-style integration
The callback handler pattern is particularly useful for:
* Production deployments with centralized monitoring
* Multi-agent orchestration systems
* Event-driven architectures
## Runtime-Specific Instrumentation
Xpander SDK uses JSII to create methods at runtime, which requires specialized instrumentation. AgentOps handles this automatically by:
* **Method Wrapping**: Dynamically wrapping agent methods as they're created
* **Context Persistence**: Maintaining session context across runtime object lifecycle
* **Agent Detection**: Automatically detecting and instrumenting new agent instances
* **Tool Result Extraction**: Properly extracting results from JSII object references
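The method-wrapping idea can be sketched generically. This is a conceptual illustration of wrapping a method on a live object, not AgentOps' actual implementation:

```python
import functools

def wrap_method(obj, name, on_call):
    """Replace obj.<name> with a wrapper that reports each call before delegating."""
    original = getattr(obj, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        on_call(name, args, kwargs)  # e.g. open a span in a tracer
        return original(*args, **kwargs)

    # Shadow the bound method on the instance with the instrumented version
    setattr(obj, name, wrapper)

class RuntimeAgent:
    """Stand-in for an object whose methods only exist at runtime."""
    def run_tool(self, tool_name):
        return f"ran {tool_name}"

recorded = []
agent = RuntimeAgent()
wrap_method(agent, "run_tool", lambda name, args, kwargs: recorded.append(name))
result = agent.run_tool("search")  # the call is recorded, then delegated
```

Because the wrapper is attached per instance, objects created later can be instrumented the same way as they appear, which is what makes this approach suitable for runtime-generated JSII methods.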
## Troubleshooting
### Import Order Issues
If you're not seeing traces, ensure AgentOps is initialized before importing Xpander SDK:
```python theme={null}
# ✅ Correct order
import agentops
agentops.init()
from xpander_sdk import XpanderClient
# ❌ Incorrect order
from xpander_sdk import XpanderClient
import agentops
agentops.init() # Too late - instrumentation won't activate
```
### Missing Tool Results
If tool results show `{"__jsii_ref__": "..."}` instead of actual content, ensure you're using the latest version of AgentOps, which includes improved JSII object handling.
### Import Errors (E402)
If you see linting errors about imports not being at the top of the file, this is expected for Xpander integration. Add `# ruff: noqa: E402` at the top of your file to suppress these warnings, as the import order is required for proper instrumentation.
## Examples
* Complete single-file implementation with callback handlers
* View the complete source code and configuration files
# Introduction
Source: https://docs.agentops.ai/v2/introduction
AgentOps is the developers' favorite platform for testing, debugging, and deploying AI agents and LLM apps.
Prefer asking your IDE? Install the Mintlify MCP Docs Server for AgentOps to chat with the docs while you code:
`npx mint-mcp add agentops`
The AgentOps app is open source. Browse the code or contribute in our GitHub repository.
## Integrate with developer-favorite LLM providers and agent frameworks
### Agent Frameworks
* [AG2](/v2/integrations/ag2)
* [Agno](/v2/integrations/agno)
* [AutoGen](/v2/integrations/autogen)
* [CrewAI](/v2/integrations/crewai)
* [Google ADK](/v2/integrations/google_adk)
* [Haystack](/v2/integrations/haystack)
* [LangChain](/v2/integrations/langchain)
* [OpenAI Agents (Python)](/v2/integrations/openai_agents_python)
* [OpenAI Agents (JS)](/v2/integrations/openai_agents_js)
* [Smolagents](/v2/integrations/smolagents)
### LLM Providers
* [Anthropic](/v2/integrations/anthropic)
* [Google Generative AI](/v2/integrations/google_generative_ai)
* [OpenAI](/v2/integrations/openai)
* [LiteLLM](/v2/integrations/litellm)
* [IBM watsonx.ai](/v2/integrations/ibm_watsonx_ai)
* [xAI (Grok)](/v2/integrations/xai)
* [Mem0](/v2/integrations/mem0)
* [Memori](/v2/integrations/memori)
Observability and monitoring for your AI agents and LLM apps. And we do it all in just two lines of code...
```python theme={null}
import agentops
agentops.init()
```
... that logs everything back to your AgentOps Dashboard.
AgentOps is also available for TypeScript/JavaScript applications. Check out our [TypeScript SDK guide](/v2/usage/typescript-sdk) for Node.js projects.
That's it! AgentOps will automatically instrument your code and start tracking traces.
Need more control? You can create custom traces using the `@trace` decorator (recommended) or manage traces manually for advanced use cases:
```python theme={null}
import agentops
from agentops.sdk.decorators import trace
agentops.init(api_key="YOUR_API_KEY", auto_start_session=False)
@trace(name="my-workflow", tags=["production"])
def my_workflow():
# Your code here
return "Workflow completed"
```
You can also set a custom trace name during initialization:
```python theme={null}
import agentops
agentops.init(api_key="YOUR_API_KEY", trace_name="custom-trace-name")
```
## The AgentOps Dashboard
[Give us a star on GitHub](https://github.com/AgentOps-AI/agentops) to bookmark it and save it for later 🖇️
With just two lines of code, you can free yourself from the chains of the terminal and instead visualize your agents' behavior
in your AgentOps Dashboard. After setting up AgentOps, each execution of your program is recorded as a session, and the data above
is captured for you automatically.
The examples below were captured with two lines of code.
### Session Drilldown
Here you will find a list of all of your previously recorded sessions and useful data about each, such as total execution time.
You also get helpful debugging info, such as the SDK versions you were on, if you're building on a supported agent framework like CrewAI or AutoGen.
LLM calls are presented as a familiar chat history view, and charts give you a breakdown of the types of events that were called and how long they took.
Find any past sessions from your Session Drawer.
Most powerful of all is the Session Waterfall. On the left, a timeline visualization of all your LLM calls, Action events, Tool calls, and Errors.
On the right, specific details about the event you've selected on the waterfall, such as the exact prompt and completion for a given LLM call.
Most of this is recorded for you automatically.
### Session Overview
View a meta-analysis of all of your sessions in a single view.
# Quickstart
Source: https://docs.agentops.ai/v2/quickstart
Get started with AgentOps in minutes with just 2 lines of code for basic monitoring, and explore powerful decorators for custom tracing.
AgentOps is designed for easy integration into your AI agent projects, providing powerful observability with minimal setup. This guide will get you started quickly.
[Give us a star on GitHub!](https://github.com/AgentOps-AI/agentops) Your support helps us grow. ⭐
The AgentOps app is open source—explore the code in our GitHub app directory.
Prefer asking your IDE? Install the Mintlify MCP Docs Server for AgentOps to chat with the docs while you code:
`npx mint-mcp add agentops`
## Installation
First, install the AgentOps SDK. We recommend including `python-dotenv` for easy API key management.
```bash pip theme={null}
pip install agentops python-dotenv
```
```bash poetry theme={null}
poetry add agentops python-dotenv
```
```bash uv theme={null}
uv pip install agentops python-dotenv
```
## Initial Setup (2 Lines of Code)
At its simplest, AgentOps can start monitoring your supported LLM and agent framework calls with just two lines of Python code.
1. **Import AgentOps**: Add `import agentops` to your script.
2. **Initialize AgentOps**: Call `agentops.init()` with your API key.
```python Python theme={null}
import agentops
import os
from dotenv import load_dotenv
# Load environment variables (recommended for API keys)
load_dotenv()
# Initialize AgentOps
# The API key can be passed directly or set as an environment variable AGENTOPS_API_KEY
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
agentops.init(AGENTOPS_API_KEY)
# That's it for basic auto-instrumentation!
# If you're using a supported library (like OpenAI, LangChain, CrewAI, etc.),
# AgentOps will now automatically track LLM calls and agent actions.
```
### Setting Your AgentOps API Key
You need an AgentOps API key to send data to your dashboard.
* Get your API key from the [AgentOps Dashboard](https://app.agentops.ai/settings/projects).
It's best practice to set your API key as an environment variable.
```bash Export to CLI theme={null}
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file theme={null}
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
If you use a `.env` file, make sure `load_dotenv()` is called before `agentops.init()`.
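If the key is missing at runtime, failures can be confusing to debug. A small stdlib-only guard that fails fast with a clear message (an illustrative pattern, not part of the AgentOps API):

```python
import os

def require_agentops_api_key() -> str:
    """Return the API key from the environment, failing fast if it was never loaded."""
    key = os.getenv("AGENTOPS_API_KEY")
    if not key:
        raise RuntimeError(
            "AGENTOPS_API_KEY is not set. Export it, or call load_dotenv() "
            "before agentops.init()."
        )
    return key

# api_key = require_agentops_api_key()
# agentops.init(api_key)
```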
## Running Your Agent & Viewing Traces
After adding the two lines and ensuring your API key is set up:
1. Run your agent application as you normally would.
2. AgentOps will automatically instrument supported libraries and send trace data.
3. Visit your [AgentOps Dashboard](https://app.agentops.ai/traces) to observe your agent's operations!
## Beyond Automatic Instrumentation: Decorators
While AgentOps automatically instruments many popular libraries, you can gain finer-grained control and track custom parts of your code using our powerful decorators. This allows you to define specific operations, group logic under named agents, track tool usage with costs, and create custom traces.
### Tracking Custom Operations with `@operation`
Instrument any function in your code to create spans that track its execution, parameters, and return values. These operations will appear in your session visualization alongside LLM calls.
```python theme={null}
from agentops.sdk.decorators import operation
@operation
def process_data(data):
# Your function logic here
processed_result = data.upper()
# agentops.record(Events("Processed Data", result=processed_result)) # Optional: record specific events
return processed_result
# Example usage:
# my_data = "example input"
# output = process_data(my_data)
```
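Conceptually, `@operation` wraps your function so that a span opens when the call starts and closes with the result. A rough stdlib-only sketch of that idea (purely illustrative, not the actual AgentOps implementation):

```python
import functools
import time

def sketch_operation(fn):
    """Illustrative stand-in for @operation: wrap the call in a 'span'
    that records the operation name and duration."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)  # run the wrapped function
        wrapper.last_span = {
            "name": fn.__name__,
            "duration_s": time.perf_counter() - start,
        }
        return result
    wrapper.last_span = None  # populated after the first call
    return wrapper

@sketch_operation
def shout(data: str) -> str:
    return data.upper()
```

The real decorator does much more (parameters, return values, parent/child nesting, export via OpenTelemetry), but this is the basic shape: your function's behavior is unchanged, while metadata about each call is captured on the side.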
### Tracking Agent Logic with `@agent`
If you structure your system with specific named agents (e.g., classes), use the `@agent` decorator on the class and `@operation` on its methods to group all downstream operations under that agent's context.
```python theme={null}
from agentops.sdk.decorators import agent, operation
@agent(name="MyCustomAgent") # You can provide a name for the agent
class MyAgent:
def __init__(self, agent_id):
self.agent_id = agent_id # agent_id is a reserved parameter for AgentOps
@operation
def perform_task(self, task_description):
# Agent task logic here
# This could include LLM calls or calls to other @operation decorated functions
return f"Agent {self.agent_id} completed: {task_description}"
# Example usage:
# research_agent = MyAgent(agent_id="researcher-001")
# result = research_agent.perform_task("Analyze market trends")
```
### Tracking Tools with `@tool`
Track the usage of specific tools or functions, and optionally associate costs with them. This data will be aggregated in your dashboard.
```python theme={null}
from agentops.sdk.decorators import tool
@tool(name="WebSearchTool", cost=0.05) # Cost is optional
def web_search(query: str) -> str:
# Tool logic here
return f"Search results for: {query}"
@tool # No cost specified
def calculator(expression: str) -> str:
    try:
        # Caution: eval() executes arbitrary code; only use it on trusted input
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"
# Example usage:
# search_result = web_search("AgentOps features")
# calculation = calculator("2 + 2")
```
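Costs recorded via `@tool` roll up per tool in your dashboard. As a rough local mental model of that aggregation (purely illustrative, not the AgentOps API):

```python
from collections import defaultdict

# Hypothetical local tally, mimicking the dashboard's per-tool cost roll-up
tool_costs = defaultdict(float)

def record_tool_call(tool_name: str, cost: float = 0.0) -> None:
    """Accumulate the cost of one tool invocation under its tool name."""
    tool_costs[tool_name] += cost

# Two searches at $0.05 each, plus a free calculator call
record_tool_call("WebSearchTool", cost=0.05)
record_tool_call("WebSearchTool", cost=0.05)
record_tool_call("calculator")
```

Each decorated call contributes its `cost` to that tool's running total, which is the figure you see aggregated per tool in the dashboard.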
### Grouping with Traces (`@trace` or manual)
Create custom traces to group a sequence of operations or define logical units of work. You can use the `@trace` decorator or manage traces manually for more complex scenarios.
If `auto_start_session=False` in `agentops.init()`, you must use `@trace` or `agentops.start_trace()` for any data to be recorded.
```python theme={null}
from agentops.sdk.decorators import trace
# Assuming MyAgent and web_search are defined as above
# Option 1: Using the @trace decorator
@trace(name="MyMainWorkflow", tags=["main-flow"])
def my_workflow_decorated(task_to_perform):
# Your workflow code here
main_agent = MyAgent(agent_id="workflow-agent") # Assuming MyAgent is defined
result = main_agent.perform_task(task_to_perform)
# Example of using a tool within the trace
tool_result = web_search(f"details for {task_to_perform}") # Assuming web_search is defined
return result, tool_result
# result_decorated = my_workflow_decorated("complex data processing")
# Option 2: Managing traces manually
# import agentops # Already imported
# custom_trace = agentops.start_trace(name="MyManualWorkflow", tags=["manual-flow"])
# try:
# # Your code here
# main_agent = MyAgent(agent_id="manual-workflow-agent") # Assuming MyAgent is defined
# result = main_agent.perform_task("another complex task")
# tool_result = web_search(f"info for {result}") # Assuming web_search is defined
# agentops.end_trace(custom_trace, end_state="Success", end_prompt=f"Completed: {result}")
# except Exception as e:
# if custom_trace: # Ensure trace was started before trying to end it
# agentops.end_trace(custom_trace, end_state="Fail", error_message=str(e))
# raise
```
### Updating Trace Metadata
You can also update metadata on running traces to add context or track progress:
```python theme={null}
from agentops import update_trace_metadata
# Update metadata during trace execution
update_trace_metadata({
"operation_name": "AI Agent Processing",
"processing_stage": "data_validation",
"records_processed": 1500,
"user_id": "user_123",
"tags": ["validation", "production"]
})
```
## Complete Example with Decorators
Here's a consolidated example showcasing how these decorators can work together:
```python theme={null}
import agentops
from agentops.sdk.decorators import agent, operation, tool, trace
from dotenv import load_dotenv
import os
# Load environment variables
load_dotenv()
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
# Initialize AgentOps.
# Set auto_start_session=False because @trace will manage the session.
agentops.init(AGENTOPS_API_KEY, auto_start_session=False, tags=["quickstart-complete-example"])
# Define a tool
@tool(name="AdvancedSearch", cost=0.02)
def advanced_web_search(query: str) -> str:
# Simulate a more advanced search
return f"Advanced search results for '{query}': [Details...]"
# Define an agent class
@agent(name="ResearchSpecialistAgent")
class ResearchAgent:
def __init__(self, agent_id: str):
self.agent_id = agent_id # This will be used as the agent_id in AgentOps
@operation(name="ConductResearch")
def conduct_research(self, research_topic: str) -> str:
# Use the tool within the agent's operation
search_results = advanced_web_search(f"Deep dive into {research_topic}")
# Simulate further processing
analysis = f"Analysis of '{research_topic}': Based on '{search_results}', the key findings are..."
return analysis
# Define a workflow using the @trace decorator
@trace(name="FullResearchWorkflow", tags=["research", "analysis", "example"])
def run_full_research_workflow(topic: str) -> str:
specialist_agent = ResearchAgent(agent_id="researcher-alpha-007")
research_findings = specialist_agent.conduct_research(topic)
final_report = f"Research Report for '{topic}':\n{research_findings}"
# agentops.record(Events("ReportGenerated", details=final_report)) # Optional: record a custom event
return final_report
# Execute the workflow
final_output = run_full_research_workflow("AI in healthcare")
print(final_output)
```
## Next Steps
You've seen how to get started with AgentOps! Explore further to leverage its full potential:
* See how AgentOps automatically instruments popular LLM and agent frameworks.
* Explore detailed examples for various use cases and integrations.
* Dive deeper into the AgentOps SDK capabilities and API.
* Learn how to group operations and create custom traces using the `@trace` decorator.
# Backend Setup Guide
Source: https://docs.agentops.ai/v2/self-hosting/backend-setup
Complete guide for setting up and running AgentOps backend services
# Backend Setup Guide
This guide covers how to set up and run the AgentOps backend services from the `/app` directory. The backend includes the API server, dashboard, database services, and observability infrastructure.
## Architecture Overview
The AgentOps backend consists of several interconnected services:
* **API Server** (`api/`) - FastAPI backend with authentication, billing, and data processing
* **Dashboard** (`dashboard/`) - Next.js frontend for visualization and management
* **Supabase** - Authentication and primary PostgreSQL database
* **ClickHouse** - Analytics database for traces and metrics
* **OpenTelemetry Collector** - Observability and trace collection
* **Redis** (optional) - Caching and session storage
## Prerequisites
Before setting up the backend, ensure you have the following installed:
### Required Software
* **Node.js** 18+ ([Download](https://nodejs.org/))
* **Python** 3.12+ ([Download](https://www.python.org/downloads/))
* **Docker & Docker Compose** ([Download](https://www.docker.com/get-started))
* **Bun** (recommended) or npm ([Install Bun](https://bun.sh/))
* **uv** (recommended for Python) ([Install uv](https://github.com/astral-sh/uv))
* **Just** (optional, for convenience commands) ([Install Just](https://github.com/casey/just))
### External Services
You'll need accounts and setup for these external services:
* **Supabase** - Database and authentication ([supabase.com](https://supabase.com))
* **ClickHouse Cloud** - Analytics database ([clickhouse.com/cloud](https://clickhouse.com/cloud))
* **Stripe** (optional) - Payment processing ([stripe.com](https://stripe.com))
## Quick Start
### 1. Clone and Navigate
```bash theme={null}
git clone https://github.com/AgentOps-AI/AgentOps.Next.git
cd AgentOps.Next/app
```
### 2. Environment Setup
Copy and configure environment files:
```bash theme={null}
# Root environment (for Docker Compose)
cp .env.example .env
# API environment
cp api/.env.example api/.env
# Dashboard environment
cp dashboard/.env.example dashboard/.env.local
```
### 3. Install Dependencies
```bash theme={null}
# Using Just (recommended)
just install
# Or manually:
bun install # Root dependencies
uv pip install -r requirements-dev.txt # Python dev tools
cd api && uv pip install -e . && cd .. # API dependencies
cd dashboard && bun install && cd .. # Dashboard dependencies
```
### 4. Configure External Services
Update your `.env` files with your service credentials. See [External Services Configuration](#external-services-configuration) below.
### 5. Start Services
```bash theme={null}
# Option 1: Using Docker Compose (recommended)
docker-compose up -d
# Option 2: Using Just commands
just api-run # Start API server
just fe-run # Start dashboard (in another terminal)
# Option 3: Native development
cd api && uv run python run.py # API server
cd dashboard && bun dev # Dashboard (in another terminal)
```
### 6. Verify Setup
* **Dashboard**: [http://localhost:3000](http://localhost:3000)
* **API Documentation**: [http://localhost:8000/redoc](http://localhost:8000/redoc)
* **API Health Check**: [http://localhost:8000/health](http://localhost:8000/health)
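If you want to script these checks, a small stdlib-only probe (assuming the default local ports above) might look like:

```python
import urllib.error
import urllib.request

def service_healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Example: probe the services started above
# print(service_healthy("http://localhost:8000/health"))
# print(service_healthy("http://localhost:3000"))
```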
## External Services Configuration
### Supabase Setup
1. Create a new project at [supabase.com](https://supabase.com)
2. Go to Settings → API to get your keys
3. Run the database migrations:
```bash theme={null}
cd supabase
npx supabase db push
```
4. Update your `.env` files with:
```env theme={null}
NEXT_PUBLIC_SUPABASE_URL=https://your-project-id.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
SUPABASE_PROJECT_ID=your-project-id
```
### ClickHouse Setup
1. Sign up for [ClickHouse Cloud](https://clickhouse.com/cloud) or self-host
2. Create a database and get connection details
3. Run the ClickHouse migrations:
```bash theme={null}
# Apply schema from clickhouse/schema_dump.sql
```
4. Update your `.env` files with:
```env theme={null}
CLICKHOUSE_HOST=your-host.clickhouse.cloud
CLICKHOUSE_PORT=8123
CLICKHOUSE_USER=default
CLICKHOUSE_PASSWORD=your-password
CLICKHOUSE_DATABASE=your-database
CLICKHOUSE_SECURE=true
```
### Stripe Setup (Optional)
For billing functionality:
1. Create a [Stripe](https://stripe.com) account
2. Get your API keys from the dashboard
3. Update your `.env` files with:
```env theme={null}
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_...
STRIPE_SECRET_KEY=sk_test_...
STRIPE_WEBHOOK_SECRET=whsec_...
```
### Additional Services (Optional)
* **Sentry**: Error monitoring
```env theme={null}
SENTRY_DSN=https://your-dsn@sentry.io/project-id
SENTRY_ENVIRONMENT=development
```
* **PostHog**: Analytics
```env theme={null}
NEXT_PUBLIC_POSTHOG_KEY=phc_your-key
NEXT_PUBLIC_POSTHOG_HOST=https://app.posthog.com
```
## Development Workflow
### Using Just Commands (Recommended)
The `justfile` provides convenient commands for development:
```bash theme={null}
# Setup and installation
just setup # Complete development setup
just install # Install all dependencies
# API Development
just api-native # Run API natively (fastest)
just api-build # Build API Docker image
just api-run # Run API in Docker
just api-test # Run API tests
# Frontend Development
just fe-run # Run dashboard development server
just fe-build # Build dashboard for production
just fe-test # Run frontend tests
# Code Quality
just lint # Run all linting checks
just format # Format all code
just test # Run all tests
# Docker Management
just up # Start all services
just down # Stop all services
just logs # View service logs
just clean # Clean up Docker resources
```
### Manual Development
If you prefer running services manually:
```bash theme={null}
# Start API server
cd api && uv run python run.py
# Start dashboard (in another terminal)
cd dashboard && bun dev
# Start landing page (in another terminal)
cd landing && bun dev
```
## Service Configuration
### API Server Configuration
Key environment variables for the API server (`api/.env`):
```env theme={null}
# Database connections
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-service-role-key
CLICKHOUSE_HOST=your-clickhouse-host
# Application settings
APP_URL=http://localhost:3000
LOGGING_LEVEL=INFO
JWT_SECRET_KEY=your-jwt-secret
# External integrations
SENTRY_DSN=your-sentry-dsn
```
### Dashboard Configuration
Key environment variables for the dashboard (`dashboard/.env.local`):
```env theme={null}
# Supabase
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
# Application URLs
NEXT_PUBLIC_APP_URL=http://localhost:8000
NEXT_PUBLIC_SITE_URL=http://localhost:3000
# Features
NEXT_PUBLIC_ENVIRONMENT_TYPE=development
NEXT_PUBLIC_PLAYGROUND=true
```
## Troubleshooting
### Common Issues
**Port conflicts:**
```bash theme={null}
# Check what's running on ports 3000 and 8000
lsof -i :3000
lsof -i :8000
```
**Database connection issues:**
* Verify your Supabase and ClickHouse credentials
* Check network connectivity to external services
* Ensure database migrations have been applied
**Docker issues:**
```bash theme={null}
# Reset Docker environment
just clean
docker system prune -f
just up
```
**Dependency issues:**
```bash theme={null}
# Clean and reinstall
rm -rf node_modules api/.venv dashboard/node_modules
just install
```
### Logs and Debugging
```bash theme={null}
# View service logs
just logs
# View specific service logs
docker-compose logs api
docker-compose logs dashboard
# Run with debug logging
LOGGING_LEVEL=DEBUG just api-run
```
## Next Steps
Once your backend is running:
1. **Create an account** at [http://localhost:3000](http://localhost:3000)
2. **Generate an API key** in the dashboard
3. **Install the AgentOps SDK** and start tracking your AI agents
4. **Explore the dashboard** to view traces and analytics
For production deployment, see our [Deployment Guide](/v2/self-hosting/deployment).
# Docker Guide
Source: https://docs.agentops.ai/v2/self-hosting/docker-guide
Complete guide for running AgentOps with Docker and Docker Compose
# Docker Guide
This guide covers how to run AgentOps backend services using Docker and Docker Compose. This is the recommended approach for both development and production deployments.
## Overview
The AgentOps Docker setup includes:
* **API Server** - FastAPI backend service
* **Dashboard** - Next.js frontend application
* **OpenTelemetry Collector** - Observability and trace collection
* **External Services** - Supabase, ClickHouse (configured separately)
## Docker Compose Configuration
The main `compose.yaml` file in the `/app` directory defines the service architecture:
```yaml theme={null}
services:
api:
build:
context: ./api
dockerfile: Dockerfile
ports:
- '8000:8000'
environment:
# Database connections
SUPABASE_URL: ${NEXT_PUBLIC_SUPABASE_URL}
SUPABASE_KEY: ${SUPABASE_SERVICE_ROLE_KEY}
CLICKHOUSE_HOST: ${CLICKHOUSE_HOST}
# ... other environment variables
network_mode: 'host'
volumes:
- ./api:/app/api
dashboard:
profiles: ['dashboard']
build:
context: ./dashboard
dockerfile: Dockerfile
ports:
- '3000:3000'
environment:
# Frontend configuration
NEXT_PUBLIC_SUPABASE_URL: ${NEXT_PUBLIC_SUPABASE_URL}
NEXT_PUBLIC_SUPABASE_ANON_KEY: ${NEXT_PUBLIC_SUPABASE_ANON_KEY}
# ... other environment variables
network_mode: 'host'
depends_on:
- api
volumes:
- ./dashboard:/app/
```
## Quick Start with Docker
### 1. Prerequisites
* Docker Engine 20.10+
* Docker Compose 2.0+
* Git
### 2. Clone and Setup
```bash theme={null}
git clone https://github.com/AgentOps-AI/AgentOps.Next.git
cd AgentOps.Next/app
# Copy environment files
cp .env.example .env
cp api/.env.example api/.env
cp dashboard/.env.example dashboard/.env.local
```
### 3. Configure Environment Variables
Update your `.env` files with your external service credentials:
```env theme={null}
# .env (root)
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
CLICKHOUSE_HOST=your-clickhouse-host
CLICKHOUSE_PASSWORD=your-password
# ... other variables
```
### 4. Start Services
```bash theme={null}
# Start all services
docker-compose up -d
# Or start with dashboard profile
docker-compose --profile dashboard up -d
# View logs
docker-compose logs -f
```
### 5. Verify Services
* **API Health**: [http://localhost:8000/health](http://localhost:8000/health)
* **API Docs**: [http://localhost:8000/redoc](http://localhost:8000/redoc)
* **Dashboard**: [http://localhost:3000](http://localhost:3000)
## Docker Commands Reference
### Basic Operations
```bash theme={null}
# Start all services in detached mode
docker-compose up -d
# Start services with dashboard
docker-compose --profile dashboard up -d
# Stop all services
docker-compose down
# Stop and remove volumes
docker-compose down -v
# View service status
docker-compose ps
# View logs for all services
docker-compose logs -f
# View logs for specific service
docker-compose logs -f api
docker-compose logs -f dashboard
```
### Development Commands
```bash theme={null}
# Rebuild services after code changes
docker-compose build
# Rebuild specific service
docker-compose build api
docker-compose build dashboard
# Force recreate containers
docker-compose up -d --force-recreate
# Scale services (if needed)
docker-compose up -d --scale api=2
```
### Debugging Commands
```bash theme={null}
# Execute commands in running containers
docker-compose exec api bash
docker-compose exec dashboard sh
# View container resource usage
docker stats
# Inspect service configuration
docker-compose config
# View service networks
docker network ls
docker network inspect app_default
```
## Using Just Commands
The project includes a `justfile` with convenient Docker commands:
```bash theme={null}
# Start all services
just up
# Stop all services
just down
# View logs
just logs
# Clean up Docker resources
just clean
# Build and run API
just api-build
just api-run
```
## Service-Specific Configuration
### API Service
The API service runs a FastAPI application with the following configuration:
**Dockerfile highlights:**
```dockerfile theme={null}
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "run.py"]
```
**Key environment variables:**
* `SUPABASE_URL`, `SUPABASE_KEY` - Database connection
* `CLICKHOUSE_HOST`, `CLICKHOUSE_PASSWORD` - Analytics database
* `LOGGING_LEVEL` - Log verbosity (DEBUG, INFO, WARNING, ERROR)
* `SENTRY_DSN` - Error tracking
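As a sketch of how a `LOGGING_LEVEL` variable like this is typically consumed on the Python side (an illustrative pattern, not the actual API server code):

```python
import logging
import os

def configure_logging() -> int:
    """Map a LOGGING_LEVEL value such as DEBUG, INFO, or WARNING onto Python's logging module."""
    level_name = os.getenv("LOGGING_LEVEL", "INFO").upper()
    level = getattr(logging, level_name, logging.INFO)  # fall back to INFO on unknown names
    logging.basicConfig(level=level)
    return level
```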
### Dashboard Service
The Dashboard service runs a Next.js application:
**Dockerfile highlights:**
```dockerfile theme={null}
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```
**Key environment variables:**
* `NEXT_PUBLIC_SUPABASE_URL`, `NEXT_PUBLIC_SUPABASE_ANON_KEY` - Frontend auth
* `NEXT_PUBLIC_APP_URL` - API server URL
* `NEXT_PUBLIC_ENVIRONMENT_TYPE` - Environment (development/production)
## OpenTelemetry Collector
The OpenTelemetry Collector is included via a separate compose file:
```yaml theme={null}
# opentelemetry-collector/compose.yaml
services:
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./config/otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
- "8889:8889" # Prometheus metrics
```
## Production Configuration
### Environment Variables for Production
```env theme={null}
# Security
DEBUG=false
LOGGING_LEVEL=WARNING
JWT_SECRET_KEY=your-secure-jwt-secret
# URLs
PROTOCOL=https
API_DOMAIN=api.yourdomain.com
APP_DOMAIN=yourdomain.com
# Database
CLICKHOUSE_SECURE=true
SUPABASE_URL=https://your-prod-project.supabase.co
# Monitoring
SENTRY_ENVIRONMENT=production
NEXT_PUBLIC_ENVIRONMENT_TYPE=production
```
### Production Docker Compose
For production, you may want to:
1. **Use specific image tags** instead of building locally
2. **Configure resource limits**
3. **Set up health checks**
4. **Use external networks**
Example production overrides (`compose.prod.yaml`):
```yaml theme={null}
services:
api:
image: agentops/api:v1.0.0
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
restart: unless-stopped
dashboard:
image: agentops/dashboard:v1.0.0
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
restart: unless-stopped
```
Run with production config:
```bash theme={null}
docker-compose -f compose.yaml -f compose.prod.yaml up -d
```
## Troubleshooting
### Common Issues
**Services won't start:**
```bash theme={null}
# Check logs for errors
docker-compose logs api
docker-compose logs dashboard
# Verify environment variables
docker-compose config
```
**Port conflicts:**
```bash theme={null}
# Check what's using ports
lsof -i :3000
lsof -i :8000
# Use different ports by editing the port mappings in compose.yaml
# (e.g. change '3000:3000' to '3001:3000'), then restart:
docker-compose up -d --force-recreate
```
**Database connection issues:**
* Verify external service credentials in `.env` files
* Check network connectivity from containers
* Ensure services are accessible from Docker network
**Build failures:**
```bash theme={null}
# Clean build cache
docker system prune -f
docker-compose build --no-cache
# Check Dockerfile syntax
docker-compose config
```
### Performance Optimization
**Resource monitoring:**
```bash theme={null}
# Monitor container resources
docker stats
# View container processes
docker-compose exec api top
```
**Volume optimization:**
```yaml theme={null}
# Use named volumes for better performance
volumes:
  - api_data:/app/data
  - dashboard_cache:/app/.next
```
**Network optimization:**
```yaml theme={null}
# Create a custom network for better isolation
networks:
  agentops:
    driver: bridge
```
## Maintenance
### Regular Maintenance Tasks
```bash theme={null}
# Update images
docker-compose pull
docker-compose up -d
# Clean up unused resources
docker system prune -f
# Backup volumes
docker run --rm -v app_api_data:/data -v $(pwd):/backup alpine tar czf /backup/api_data.tar.gz -C /data .
# View disk usage
docker system df
```
### Monitoring
```bash theme={null}
# Service health checks
curl http://localhost:8000/health
curl http://localhost:3000/api/health
# Container logs
docker-compose logs --tail=100 -f api
# Resource usage
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```
## Next Steps
* Set up [monitoring and observability](/v2/self-hosting/monitoring)
* Configure [production deployment](/v2/self-hosting/deployment)
* Set up [backup and recovery](/v2/self-hosting/backup)
* Configure [SSL/TLS certificates](/v2/self-hosting/ssl)
# Just Commands Reference
Source: https://docs.agentops.ai/v2/self-hosting/just-commands
Complete reference for all available Just commands in AgentOps development
# Just Commands Reference
[Just](https://github.com/casey/just) is a command runner that provides convenient shortcuts for common development tasks. The AgentOps project includes a comprehensive `justfile` with commands for setup, development, testing, and deployment.
## Installation
First, install Just if you haven't already:
```bash theme={null}
# macOS
brew install just
# Linux (using cargo)
cargo install just
# Or download from GitHub releases
curl --proto '=https' --tlsv1.2 -sSf https://just.systems/install.sh | bash -s -- --to ~/bin
```
## Quick Reference
View all available commands:
```bash theme={null}
just
# or
just --list
```
## Setup Commands
### `just setup`
Complete development environment setup - runs the full initialization process.
```bash theme={null}
just setup
```
**What it does:**
* Copies environment files (`.env.example` → `.env`, etc.)
* Installs all dependencies (root, API, dashboard)
* Sets up development environment
**Use when:**
* First time setting up the project
* After a fresh clone
* When you want to reset your development environment
### `just install`
Install all project dependencies across all services.
```bash theme={null}
just install
```
**What it does:**
* Installs root Node.js dependencies (`bun install`)
* Installs Python development dependencies (`uv pip install -r requirements-dev.txt`)
* Installs API dependencies (`cd api && uv pip install -e .`)
* Installs dashboard dependencies (`cd dashboard && bun install`)
**Use when:**
* After pulling changes that modify dependencies
* When `package.json`, `pyproject.toml`, or requirements files change
## API Development Commands
### `just api-native`
Run the API server natively (fastest for development).
```bash theme={null}
just api-native
```
**What it does:**
* Starts the FastAPI server using `uv run python run.py`
* Runs on `http://localhost:8000`
* Provides fastest reload times for development
**Use when:**
* Active API development
* You need fastest iteration cycles
* Debugging API code
### `just api-build`
Build the API Docker image.
```bash theme={null}
just api-build
# With Stripe support
just api-build stripe
```
**What it does:**
* Builds Docker image for the API service
* Uses `./scripts/just-api-build.sh`
* Optional Stripe integration support
**Use when:**
* Preparing for Docker-based deployment
* Testing Docker build process
* Before running `just api-run`
### `just api-run`
Run the API server in a Docker container.
```bash theme={null}
just api-run
# With Stripe support
just api-run stripe
```
**What it does:**
* Runs the API service using Docker
* Uses `./scripts/just-api-run.sh`
* Includes all necessary environment variables
* Optional Stripe integration
**Use when:**
* Testing Docker deployment locally
* You need isolated environment
* Production-like testing
### `just api-test`
Run API tests using pytest.
```bash theme={null}
just api-test
```
**What it does:**
* Changes to `api/` directory
* Runs `pytest` with all configured tests
* Includes unit and integration tests
**Use when:**
* Before committing API changes
* Validating API functionality
* Continuous integration
## Frontend Development Commands
### `just fe-run`
Run the dashboard development server.
```bash theme={null}
just fe-run
```
**What it does:**
* Changes to `dashboard/` directory
* Installs dependencies (`bun install`)
* Starts development server (`bun run dev`)
* Available at `http://localhost:3000`
**Use when:**
* Active dashboard development
* Testing frontend changes
* Full-stack development
### `just fe-build`
Build the dashboard for production.
```bash theme={null}
just fe-build
```
**What it does:**
* Changes to `dashboard/` directory
* Builds optimized production bundle (`bun run build`)
* Generates static assets
**Use when:**
* Preparing for production deployment
* Testing production build locally
* Performance optimization
### `just fe-test`
Run frontend tests.
```bash theme={null}
just fe-test
```
**What it does:**
* Changes to `dashboard/` directory
* Runs test suite (`bun test`)
* Includes unit and component tests
**Use when:**
* Before committing frontend changes
* Validating UI functionality
* Continuous integration
## Code Quality Commands
### `just lint`
Run all linting checks across the project.
```bash theme={null}
just lint
```
**What it does:**
* Runs `bun run lint` from project root
* Checks JavaScript/TypeScript files with ESLint
* Checks Python files with Ruff
* Validates code style and quality
**Use when:**
* Before committing changes
* Code review preparation
* Maintaining code quality
### `just format`
Format all code using project standards.
```bash theme={null}
just format
```
**What it does:**
* Runs `ruff format` for Python files
* Applies consistent code formatting
* Fixes automatically correctable issues
**Use when:**
* Before committing changes
* Standardizing code style
* Preparing for code review
### `just test`
Run all tests across the project.
```bash theme={null}
just test
```
**What it does:**
* Runs `just api-test` (API tests)
* Runs `just fe-test` (frontend tests)
* Comprehensive test suite execution
**Use when:**
* Before major releases
* Validating entire system
* Continuous integration
## Docker Management Commands
### `just up`
Start all services with Docker Compose.
```bash theme={null}
just up
```
**What it does:**
* Runs `docker-compose up -d`
* Starts all defined services in detached mode
* Creates networks and volumes as needed
**Use when:**
* Starting development environment
* Testing full system integration
* Docker-based development
### `just down`
Stop all Docker services.
```bash theme={null}
just down
```
**What it does:**
* Runs `docker-compose down`
* Stops and removes containers
* Preserves volumes and networks
**Use when:**
* Stopping development environment
* Switching between development modes
* Cleaning up running services
### `just logs`
View Docker logs for all services.
```bash theme={null}
just logs
```
**What it does:**
* Runs `docker-compose logs -f`
* Shows real-time logs from all services
* Useful for debugging and monitoring
**Use when:**
* Debugging service issues
* Monitoring application behavior
* Troubleshooting problems
### `just clean`
Clean up Docker resources.
```bash theme={null}
just clean
```
**What it does:**
* Runs `docker-compose down -v` (stops services and removes volumes)
* Runs `docker system prune -f` (removes unused Docker resources)
* Frees up disk space
**Use when:**
* Cleaning up development environment
* Freeing disk space
* Resolving Docker issues
* Fresh start needed
## Command Combinations and Workflows
### Full Development Setup
```bash theme={null}
# First time setup
just setup
# Start development
just api-native # Terminal 1
just fe-run # Terminal 2
```
### Docker Development
```bash theme={null}
# Setup and start with Docker
just setup
just up
just logs
```
### Testing Workflow
```bash theme={null}
# Run quality checks before committing
just lint
just format
just test
```
### Production Preparation
```bash theme={null}
# Build and test production assets
just api-build
just fe-build
just test
```
## Environment-Specific Usage
### Development Environment
```bash theme={null}
# Fast iteration cycle
just api-native # Native API for speed
just fe-run # Frontend dev server
```
### Integration Testing
```bash theme={null}
# Full Docker environment
just up # All services in Docker
just test # Run full test suite
```
### Production Testing
```bash theme={null}
# Production-like environment
just api-build # Build production API image
just fe-build # Build production frontend
just up # Run with Docker
```
## Troubleshooting Just Commands
### Command Not Found
```bash theme={null}
# Verify Just is installed
just --version
# Install if missing
brew install just # macOS
```
### Permission Issues
```bash theme={null}
# Make sure justfile is executable
chmod +x justfile
# Check file permissions
ls -la justfile
```
### Environment Issues
```bash theme={null}
# Verify environment files exist
ls -la .env api/.env dashboard/.env.local
# Run setup to create missing files
just setup
```
### Docker Issues
```bash theme={null}
# Clean Docker environment
just clean
# Restart Docker service
sudo systemctl restart docker # Linux
```
## Custom Commands and Extensions
You can extend the `justfile` with your own commands. Add to the bottom of the file:
```bash theme={null}
# Custom command example
my-command:
    @echo "Running my custom command"
    # Your commands here

# Command with parameters
deploy env:
    @echo "Deploying to {{env}}"
    # Deployment commands
```
Use custom commands:
```bash theme={null}
just my-command
just deploy staging
```
## Best Practices
### Daily Development
1. `just api-native` for API development (fastest)
2. `just fe-run` for frontend development
3. `just lint && just test` before commits
### Integration Testing
1. `just up` for full environment
2. `just logs` for monitoring
3. `just clean` when issues arise
### Production Preparation
1. `just api-build && just fe-build` for production builds
2. `just test` for validation
3. `just up` for final testing
## Related Documentation
* [Backend Setup Guide](/v2/self-hosting/backend-setup) - Complete setup instructions
* [Docker Guide](/v2/self-hosting/docker-guide) - Docker-specific commands
* [Development Workflow](/v2/self-hosting/development) - Development best practices
# Native Development Guide
Source: https://docs.agentops.ai/v2/self-hosting/native-development
Complete guide for running AgentOps backend services natively without Docker
# Native Development Guide
This guide covers how to run AgentOps backend services natively on your local machine without Docker. Native development provides the fastest iteration cycles and is ideal for active development work.
## Overview
Running natively means:
* **Faster startup times** - No container overhead
* **Direct file system access** - Immediate code changes
* **Native debugging** - Use your preferred IDE debugger
* **Resource efficiency** - Lower memory and CPU usage
## Prerequisites
### System Requirements
* **Python 3.12+** with pip or uv
* **Node.js 18+** with npm, yarn, or bun
* **Git** for version control
* **Just** (optional) for convenience commands
### External Services
You'll need these external services configured:
* **Supabase** - Database and authentication
* **ClickHouse** - Analytics database
* **Stripe** (optional) - Payment processing
## Quick Start
### 1. Clone and Setup
```bash theme={null}
git clone https://github.com/AgentOps-AI/AgentOps.Next.git
cd AgentOps.Next/app
# Copy environment files
cp .env.example .env
cp api/.env.example api/.env
cp dashboard/.env.example dashboard/.env.local
```
### 2. Install Dependencies
#### Root Dependencies
```bash theme={null}
# Install shared tools (linting, formatting)
bun install
# Install Python development tools
uv pip install -r requirements-dev.txt
```
#### API Dependencies
```bash theme={null}
cd api
# Using uv (recommended)
uv pip install -e .
# Or using pip
pip install -e .
cd ..
```
#### Dashboard Dependencies
```bash theme={null}
cd dashboard
# Using bun (recommended)
bun install
# Or using npm
npm install
cd ..
```
### 3. Configure Environment Variables
Update your environment files with your service credentials. See [External Services Setup](#external-services-setup) below.
### 4. Start Services
```bash theme={null}
# Terminal 1: API Server
cd api && uv run python run.py
# Terminal 2: Dashboard (in a new terminal)
cd dashboard && bun dev
# Terminal 3: Landing Page (optional, in a new terminal)
cd landing && bun dev
```
### 5. Verify Setup
* **API Health**: [http://localhost:8000/health](http://localhost:8000/health)
* **API Documentation**: [http://localhost:8000/redoc](http://localhost:8000/redoc)
* **Dashboard**: [http://localhost:3000](http://localhost:3000)
* **Landing Page**: [http://localhost:3001](http://localhost:3001)
## External Services Setup
### Supabase Configuration
1. Create a new project at [supabase.com](https://supabase.com)
2. Get your project credentials from Settings → API
3. Set up the database schema:
```bash theme={null}
cd supabase
npx supabase db push
```
4. Update `api/.env` and `dashboard/.env.local`:
```env theme={null}
# API environment
SUPABASE_URL=https://your-project-id.supabase.co
SUPABASE_KEY=your-service-role-key
# Dashboard environment
NEXT_PUBLIC_SUPABASE_URL=https://your-project-id.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
```
### ClickHouse Configuration
1. Sign up for [ClickHouse Cloud](https://clickhouse.com/cloud) or self-host
2. Create a database and get connection details
3. Apply the schema:
```bash theme={null}
# Use the schema from clickhouse/schema_dump.sql
clickhouse-client --host your-host --query "$(cat clickhouse/schema_dump.sql)"
```
4. Update `api/.env`:
```env theme={null}
CLICKHOUSE_HOST=your-host.clickhouse.cloud
CLICKHOUSE_PORT=8123
CLICKHOUSE_USER=default
CLICKHOUSE_PASSWORD=your-password
CLICKHOUSE_DATABASE=your-database
CLICKHOUSE_SECURE=true
```
## API Server Setup
### Environment Configuration
Key variables in `api/.env`:
```env theme={null}
# Database Connections
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-service-role-key
CLICKHOUSE_HOST=your-clickhouse-host
CLICKHOUSE_PASSWORD=your-password
# Application Settings
APP_URL=http://localhost:3000
LOGGING_LEVEL=INFO
JWT_SECRET_KEY=your-jwt-secret-key
# Optional Integrations
SENTRY_DSN=your-sentry-dsn
SENTRY_ENVIRONMENT=development
```
### Running the API Server
#### Using Just (Recommended)
```bash theme={null}
just api-native
```
#### Manual Command
```bash theme={null}
cd api
uv run python run.py
```
#### Alternative Methods
```bash theme={null}
# Using pip and python directly
cd api
pip install -e .
python run.py
# Using uvicorn directly
cd api
uvicorn agentops.main:app --host 0.0.0.0 --port 8000 --reload
```
### API Development Features
* **Auto-reload** on file changes
* **Interactive API docs** at [http://localhost:8000/docs](http://localhost:8000/docs)
* **ReDoc documentation** at [http://localhost:8000/redoc](http://localhost:8000/redoc)
* **Health check** at [http://localhost:8000/health](http://localhost:8000/health)
## Dashboard Setup
### Environment Configuration
Key variables in `dashboard/.env.local`:
```env theme={null}
# Supabase Configuration
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
# Application URLs
NEXT_PUBLIC_APP_URL=http://localhost:8000
NEXT_PUBLIC_SITE_URL=http://localhost:3000
# Feature Flags
NEXT_PUBLIC_ENVIRONMENT_TYPE=development
NEXT_PUBLIC_PLAYGROUND=true
# Optional Services
NEXT_PUBLIC_POSTHOG_KEY=your-posthog-key
NEXT_PUBLIC_SENTRY_DSN=your-sentry-dsn
```
### Running the Dashboard
#### Using Just (Recommended)
```bash theme={null}
just fe-run
```
#### Manual Commands
```bash theme={null}
cd dashboard
# Using bun
bun install
bun dev
# Using npm
npm install
npm run dev
# Using yarn
yarn install
yarn dev
```
### Dashboard Development Features
* **Hot reload** on file changes
* **Fast Refresh** for React components
* **Development tools** integration
* **Source maps** for debugging
## Development Workflow
### Daily Development Routine
1. **Start services**:
```bash theme={null}
# Terminal 1
just api-native
# Terminal 2
just fe-run
```
2. **Make changes** to your code
3. **Test changes** - services auto-reload
4. **Run tests** before committing:
```bash theme={null}
just test
```
### Code Quality Workflow
```bash theme={null}
# Format code
just format
# Run linting
just lint
# Run tests
just test
# All-in-one quality check
just format && just lint && just test
```
### Database Development
```bash theme={null}
# Apply Supabase migrations
cd supabase
npx supabase db push
# Reset database (development only)
npx supabase db reset
# Generate TypeScript types
npx supabase gen types typescript --local > types/database.types.ts
```
## Testing
### API Testing
```bash theme={null}
cd api
# Run all tests
pytest
# Run with coverage
pytest --cov=agentops
# Run specific test file
pytest tests/test_auth.py
# Run with verbose output
pytest -v
```
### Dashboard Testing
```bash theme={null}
cd dashboard
# Run all tests
bun test
# Run tests in watch mode
bun test --watch
# Run tests with coverage
bun test --coverage
```
### Integration Testing
```bash theme={null}
# Run full test suite
just test
# Test API and dashboard separately
just api-test
just fe-test
```
## Debugging
### API Debugging
1. **Set breakpoints** in your IDE
2. **Run with debugger**:
```bash theme={null}
cd api
python -m debugpy --listen 5678 --wait-for-client run.py
```
3. **Attach your IDE debugger** to port 5678
### Dashboard Debugging
1. **Use browser dev tools** (F12)
2. **Next.js debugging**:
```bash theme={null}
cd dashboard
NODE_OPTIONS='--inspect' bun dev
```
3. **Attach debugger** at chrome://inspect
### Log Debugging
```bash theme={null}
# API logs with debug level
cd api
LOGGING_LEVEL=DEBUG uv run python run.py
# Dashboard logs
cd dashboard
DEBUG=* bun dev
```
## Performance Optimization
### API Performance
* **Use native Python** for fastest development
* **Enable hot reload** with uvicorn
* **Profile with py-spy**:
```bash theme={null}
pip install py-spy
py-spy top --pid $(pgrep -f "python run.py")
```
### Dashboard Performance
* **Use bun** for faster package management
* **Enable Fast Refresh** (enabled by default)
* **Analyze bundle size**:
```bash theme={null}
cd dashboard
ANALYZE=true bun run build
```
## Troubleshooting
### Common Issues
**Python import errors:**
```bash theme={null}
# Reinstall in editable mode
cd api
uv pip install -e .
```
**Node.js module not found:**
```bash theme={null}
# Clear and reinstall
cd dashboard
rm -rf node_modules bun.lockb package-lock.json
bun install
```
**Port already in use:**
```bash theme={null}
# Find the process using the port
lsof -i :8000  # API port
lsof -i :3000  # Dashboard port
# Kill it, replacing <PID> with the process ID from the lsof output
kill -9 <PID>
# Or in one step:
kill -9 $(lsof -ti :8000)
```
**Database connection issues:**
* Verify credentials in `.env` files
* Check network connectivity
* Ensure external services are running
### Performance Issues
**Slow API startup:**
```bash theme={null}
# Use uv for faster Python package management
uv pip install -e .
```
**Slow dashboard reload:**
```bash theme={null}
# Use bun instead of npm
cd dashboard
rm -rf node_modules
bun install
```
### Development Environment Reset
```bash theme={null}
# Clean everything and start fresh
rm -rf api/.venv dashboard/node_modules node_modules
just setup
```
## IDE Configuration
### VS Code
Recommended extensions:
* Python
* Pylance
* ES7+ React/Redux/React-Native snippets
* Tailwind CSS IntelliSense
* Prettier - Code formatter
Settings (`.vscode/settings.json`):
```json theme={null}
{
  "python.defaultInterpreterPath": "./api/.venv/bin/python",
  "python.linting.enabled": true,
  "python.linting.ruffEnabled": true,
  "editor.formatOnSave": true,
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": true
  }
}
```
### PyCharm
1. **Set Python interpreter** to `./api/.venv/bin/python`
2. **Enable Ruff** for Python linting
3. **Configure Node.js** interpreter for dashboard
4. **Set up run configurations** for API and dashboard
## Advanced Configuration
### Custom Environment Variables
Add custom variables to your `.env` files:
```env theme={null}
# Custom API settings
CUSTOM_FEATURE_FLAG=true
DEBUG_SQL_QUERIES=false
# Custom dashboard settings
NEXT_PUBLIC_CUSTOM_FEATURE=enabled
```
### Development Proxy
Set up a proxy for API calls in development:
```javascript theme={null}
// dashboard/next.config.js
module.exports = {
  async rewrites() {
    return [
      {
        source: '/api/:path*',
        destination: 'http://localhost:8000/:path*',
      },
    ]
  },
}
```
### Hot Reload Configuration
Fine-tune hot reload behavior:
```python theme={null}
# api/run.py
if __name__ == "__main__":
    import uvicorn

    uvicorn.run(
        "agentops.main:app",
        host="0.0.0.0",
        port=8000,
        reload=True,
        reload_dirs=["agentops"],  # Only watch specific directories
        reload_excludes=["*.pyc", "*.log"],  # Exclude certain files
    )
```
## Next Steps
Once your native development environment is running:
1. **Explore the codebase** - Start with `api/agentops/main.py` and `dashboard/pages/index.tsx`
2. **Make your first changes** - Try modifying a simple component or API endpoint
3. **Set up testing** - Write tests for your changes
4. **Configure your IDE** - Set up debugging and linting
5. **Join the community** - Connect with other developers
For production deployment, see our [Deployment Guide](/v2/self-hosting/deployment).
# Self-Hosting Overview
Source: https://docs.agentops.ai/v2/self-hosting/overview
Complete guide to self-hosting AgentOps backend services and infrastructure
# Self-Hosting AgentOps
Welcome to the AgentOps self-hosting documentation. This section provides comprehensive guides for running AgentOps backend services on your own infrastructure.
## What is Self-Hosting?
Self-hosting AgentOps means running the entire AgentOps platform on your own servers or cloud infrastructure, giving you complete control over:
* **Data sovereignty** - Your data stays on your infrastructure
* **Customization** - Modify the platform to fit your needs
* **Security** - Implement your own security policies
* **Compliance** - Meet specific regulatory requirements
* **Cost control** - Manage infrastructure costs directly
## Architecture Overview
The AgentOps platform consists of several key components:
### Core Services
* **API Server** - FastAPI backend handling authentication, data processing, and business logic
* **Dashboard** - Next.js frontend providing the user interface and visualization
* **Database Layer** - Supabase (PostgreSQL) for primary data and ClickHouse for analytics
### Supporting Infrastructure
* **OpenTelemetry Collector** - Trace and metrics collection
* **Authentication** - Supabase Auth for user management
* **File Storage** - Supabase Storage for file handling
* **Monitoring** - Optional Sentry, PostHog integration
### External Dependencies
* **Supabase** - Database and authentication services
* **ClickHouse** - Analytics database for traces and metrics
* **Stripe** (optional) - Payment processing for billing features
## Deployment Options
### 1. Development Setup
Perfect for local development and testing:
* **Native Development** - Run services directly on your machine
* **Docker Development** - Use Docker Compose for isolated environment
* **Hybrid Approach** - Mix native and containerized services
### 2. Production Deployment
For production workloads:
* **Docker Compose** - Simple single-server deployment
* **Kubernetes** - Scalable container orchestration
* **Cloud Platforms** - Deploy to AWS, GCP, Azure
* **Serverless** - Use cloud functions and managed services
## Quick Start Guide
### Prerequisites
Before you begin, ensure you have:
* Modern operating system (Linux, macOS, or Windows with WSL)
* Docker and Docker Compose
* Node.js 18+ and Python 3.12+
* Access to external services (Supabase, ClickHouse)
### 5-Minute Setup
```bash theme={null}
# 1. Clone the repository
git clone https://github.com/AgentOps-AI/AgentOps.Next.git
cd AgentOps.Next/app
# 2. Copy environment files
cp .env.example .env
cp api/.env.example api/.env
cp dashboard/.env.example dashboard/.env.local
# 3. Configure external services (see guides below)
# Edit .env files with your service credentials
# 4. Start with Docker
docker-compose up -d
# Or start natively for development
just api-native # Terminal 1
just fe-run # Terminal 2
```
Visit [http://localhost:3000](http://localhost:3000) to access the dashboard!
## Documentation Sections
### [Backend Setup Guide](/v2/self-hosting/backend-setup)
Complete guide for setting up the AgentOps backend services, including:
* Prerequisites and system requirements
* Environment configuration
* External service setup
* Service verification and troubleshooting
### [Docker Guide](/v2/self-hosting/docker-guide)
Comprehensive Docker and Docker Compose documentation:
* Docker configuration and setup
* Container management commands
* Production Docker configurations
* Performance optimization and monitoring
### [Native Development Guide](/v2/self-hosting/native-development)
Guide for running services natively without Docker:
* Fastest development setup
* IDE configuration and debugging
* Performance optimization
* Advanced development workflows
### [Just Commands Reference](/v2/self-hosting/just-commands)
Complete reference for all available Just commands:
* Setup and installation commands
* Development workflow commands
* Testing and quality assurance
* Docker management utilities
## Choosing Your Setup
### For Active Development
**Recommended: Native Development**
* Fastest iteration cycles
* Direct debugging capabilities
* Lower resource usage
* Immediate file system access
```bash theme={null}
just api-native # Start API natively
just fe-run # Start dashboard
```
### For Team Development
**Recommended: Docker Development**
* Consistent environment across team
* Isolated service dependencies
* Easy environment reset
* Production-like testing
```bash theme={null}
just up # Start all services
just logs # Monitor services
```
### For Production
**Recommended: Docker with External Services**
* Scalable and maintainable
* Health checks and restart policies
* Resource limits and monitoring
* Easy backup and recovery
## External Services Setup
### Required Services
#### Supabase (Database & Auth)
* **Purpose**: Primary PostgreSQL database and user authentication
* **Setup**: Create project at [supabase.com](https://supabase.com)
* **Cost**: Free tier available, pay-as-you-scale
* **Alternatives**: Self-hosted PostgreSQL + custom auth
#### ClickHouse (Analytics)
* **Purpose**: High-performance analytics database for traces and metrics
* **Setup**: ClickHouse Cloud or self-hosted
* **Cost**: Pay-per-usage or fixed instances
* **Alternatives**: PostgreSQL (with performance trade-offs)
### Optional Services
#### Stripe (Billing)
* **Purpose**: Payment processing and subscription management
* **Setup**: Create account at [stripe.com](https://stripe.com)
* **Required for**: Billing features
* **Alternatives**: Remove billing features or custom payment solution
#### Monitoring & Analytics
* **Sentry**: Error tracking and performance monitoring
* **PostHog**: Product analytics and feature flags
* **Setup**: Optional but recommended for production
## Security Considerations
### Authentication & Authorization
* JWT tokens for API authentication
* Supabase Auth for user management
* Row-level security in PostgreSQL
* API key management for SDK access
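The `JWT_SECRET_KEY` configured in `api/.env` is what makes these tokens tamper-evident. As a rough standard-library sketch of how an HS256 JWT is signed and verified (illustrative only, not the API's actual implementation; the secret value is a placeholder):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

secret = b"your-jwt-secret-key"  # placeholder, like JWT_SECRET_KEY in api/.env
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "user-123"}).encode())
signing_input = f"{header}.{payload}".encode()
signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
token = f"{header}.{payload}.{signature}"

# Verification recomputes the signature and compares in constant time
expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
print(hmac.compare_digest(signature, expected))  # True
```

Anyone holding the secret can recompute the signature, so rotating `JWT_SECRET_KEY` invalidates all previously issued tokens.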
### Data Protection
* HTTPS/TLS encryption in transit
* Database encryption at rest
* Environment variable security
* Secret management best practices
### Network Security
* Firewall configuration
* VPN or private networks
* API rate limiting
* CORS configuration
## Monitoring & Observability
### Application Monitoring
* Health check endpoints
* Application metrics and logs
* Error tracking with Sentry
* Performance monitoring
### Infrastructure Monitoring
* Container resource usage
* Database performance
* Network connectivity
* Storage utilization
### Alerting
* Service availability alerts
* Error rate monitoring
* Resource usage thresholds
* Custom business metrics
## Scaling Considerations
### Horizontal Scaling
* Multiple API server instances
* Load balancer configuration
* Database connection pooling
* Shared session storage
### Vertical Scaling
* Resource allocation optimization
* Database performance tuning
* Caching strategies
* CDN for static assets
### Database Scaling
* Read replicas for PostgreSQL
* ClickHouse cluster setup
* Connection pool management
* Query optimization
## Backup & Recovery
### Database Backups
* Automated Supabase backups
* ClickHouse data exports
* Point-in-time recovery
* Cross-region replication
### Application Backups
* Configuration files
* Environment variables
* Custom modifications
* SSL certificates
### Disaster Recovery
* Recovery time objectives (RTO)
* Recovery point objectives (RPO)
* Failover procedures
* Data integrity verification
## Cost Optimization
### Infrastructure Costs
* Right-size compute resources
* Use spot instances where appropriate
* Implement auto-scaling
* Monitor and optimize usage
### Service Costs
* Optimize database queries
* Implement caching strategies
* Use appropriate service tiers
* Monitor third-party service usage
## Getting Help
### Documentation
* Follow the step-by-step guides
* Check troubleshooting sections
* Review configuration examples
* Understand architecture decisions
### Community Support
* GitHub Issues for bug reports
* GitHub Discussions for questions
* Discord community for real-time help
* Stack Overflow for technical questions
### Professional Support
* Enterprise support options
* Consulting services available
* Custom deployment assistance
* Training and onboarding
## Next Steps
1. **Start with the [Backend Setup Guide](/v2/self-hosting/backend-setup)** - Get your environment running
2. **Choose your deployment method** - Docker or native development
3. **Configure external services** - Set up Supabase and ClickHouse
4. **Customize for your needs** - Modify configuration and features
5. **Plan for production** - Implement monitoring, backups, and scaling
Ready to begin? Start with our [Backend Setup Guide](/v2/self-hosting/backend-setup)!
# Advanced Configuration
Source: https://docs.agentops.ai/v2/usage/advanced-configuration
In AgentOps fashion, you only need to add one line of "code" to your `.env` file 😊
```env .env theme={null}
AGENTOPS_API_KEY=
```
Find your AgentOps API Key in your Settings > [Projects & API Keys](https://app.agentops.ai/settings/projects) page.
#### Optional settings:
```env .env theme={null}
# The AgentOps API endpoint. Defaults to https://api.agentops.ai
AGENTOPS_API_ENDPOINT=https://api.agentops.ai
# The logging level. Defaults to INFO
AGENTOPS_LOG_LEVEL=INFO
# Whether to write logs to a file. Defaults to TRUE
AGENTOPS_LOGGING_TO_FILE=TRUE
# Whether to opt out of recording environment data. Defaults to FALSE
AGENTOPS_ENV_DATA_OPT_OUT=FALSE
```
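These settings can also be provided programmatically before the SDK starts; a minimal sketch using only the standard library, with the variable names documented above:

```python
import os

# Set AgentOps configuration before calling agentops.init();
# setdefault leaves any value already present in the environment untouched.
os.environ.setdefault("AGENTOPS_LOG_LEVEL", "INFO")
os.environ.setdefault("AGENTOPS_LOGGING_TO_FILE", "TRUE")
os.environ.setdefault("AGENTOPS_ENV_DATA_OPT_OUT", "FALSE")
```

This is handy in notebooks or test harnesses where editing a `.env` file is inconvenient.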
# Context Managers
Source: https://docs.agentops.ai/v2/usage/context-managers
Use AgentOps traces as Python context managers for automatic lifecycle management
# Context Managers
AgentOps provides native context manager support for traces, allowing you to use Python's `with` statement for automatic trace lifecycle management. This approach ensures traces are properly started and ended, even when exceptions occur.
## Basic Usage
The simplest way to use context managers is with the `start_trace()` function:
```python theme={null}
import agentops
# Initialize AgentOps
agentops.init(api_key="your-api-key")
# Use context manager for automatic trace management
with agentops.start_trace("my_workflow") as trace:
    # Your code here
    print("Processing data...")
    # Trace automatically ends when exiting the with block
```
The trace will automatically:
* Start when entering the `with` block
* End with "Success" status when exiting normally
* End with "Error" status if an exception occurs
* Clean up resources properly in all cases
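Conceptually this mirrors an ordinary Python context manager; a stdlib-only sketch of the Success/Error bookkeeping (a hypothetical stand-in, not the SDK's real implementation):

```python
from contextlib import contextmanager

@contextmanager
def sketch_trace(name):
    # Simplified stand-in for agentops.start_trace: record an end state
    # based on whether the with-block raised.
    state = {"name": name, "end_state": None}
    try:
        yield state
        state["end_state"] = "Success"
    except Exception:
        state["end_state"] = "Error"
        raise  # the exception still propagates to the caller

# Normal exit ends with Success
with sketch_trace("demo") as t:
    pass
print(t["end_state"])  # Success

# An exception ends with Error, and is re-raised
try:
    with sketch_trace("failing") as t2:
        raise ValueError("boom")
except ValueError:
    pass
print(t2["end_state"])  # Error
```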
## Advanced Usage
### Traces with Tags
You can add tags to traces for better organization and filtering:
```python theme={null}
import agentops
agentops.init(api_key="your-api-key")
# Using list tags
with agentops.start_trace("data_processing", tags=["batch", "production"]):
    process_batch_data()

# Using dictionary tags for more structured metadata
with agentops.start_trace("user_request", tags={
    "user_id": "12345",
    "request_type": "query",
    "priority": "high"
}):
    handle_user_request()
```
### Parallel Traces
Context managers create independent parallel traces, not parent-child relationships:
```python theme={null}
import agentops
agentops.init(api_key="your-api-key")
# Sequential parallel traces
with agentops.start_trace("task_1"):
    print("Task 1 executing")

with agentops.start_trace("task_2"):
    print("Task 2 executing")

# Nested context managers create parallel traces
with agentops.start_trace("outer_workflow"):
    print("Outer workflow started")
    with agentops.start_trace("inner_task"):
        print("Inner task executing (parallel to outer)")
    print("Outer workflow continuing")
```
### Exception Handling
Context managers automatically handle exceptions and set appropriate trace states:
```python theme={null}
import agentops
agentops.init(api_key="your-api-key")
# Automatic error handling
try:
    with agentops.start_trace("risky_operation"):
        # This will automatically set trace status to "Error"
        raise ValueError("Something went wrong")
except ValueError as e:
    print(f"Caught error: {e}")
    # Trace has already been ended with Error status

# Graceful degradation pattern
try:
    with agentops.start_trace("primary_service"):
        result = call_primary_service()
except ServiceUnavailableError:
    with agentops.start_trace("fallback_service"):
        result = call_fallback_service()
```
### Concurrent Execution
Context managers work seamlessly with threading and asyncio:
```python Threading theme={null}
import agentops
import threading
agentops.init(api_key="your-api-key")
# With threading
def worker_function(worker_id):
    with agentops.start_trace(f"worker_{worker_id}"):
        # Each thread gets its own independent trace
        process_work(worker_id)

threads = []
for i in range(3):
    thread = threading.Thread(target=worker_function, args=(i,))
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()
```
```python Asyncio theme={null}
import agentops
import asyncio
agentops.init(api_key="your-api-key")
# With asyncio
async def async_task(task_id):
    with agentops.start_trace(f"async_task_{task_id}"):
        await asyncio.sleep(0.1)  # Simulate async work
        return f"result_{task_id}"

async def main():
    tasks = [async_task(i) for i in range(3)]
    results = await asyncio.gather(*tasks)
    return results
# Run async tasks
results = asyncio.run(main())
```
## Production Patterns
### API Endpoint Monitoring
```python theme={null}
import agentops
from flask import Flask, request
app = Flask(__name__)
agentops.init(api_key="your-api-key")
@app.route('/api/process', methods=['POST'])
def process_request():
    # Create trace for each API request
    with agentops.start_trace("api_request", tags={
        "endpoint": "/api/process",
        "method": "POST",
        "user_id": request.headers.get("user-id")
    }):
        try:
            data = request.get_json()
            result = process_data(data)
            return {"status": "success", "result": result}
        except Exception as e:
            # Exception automatically sets trace to Error status
            return {"status": "error", "message": str(e)}, 500
```
### Batch Processing
```python theme={null}
import agentops
agentops.init(api_key="your-api-key")
def process_batch(items):
    with agentops.start_trace("batch_processing", tags={
        "batch_size": len(items),
        "batch_type": "data_processing"
    }):
        successful = 0
        failed = 0
        for item in items:
            try:
                with agentops.start_trace("item_processing", tags={
                    "item_id": item.get("id"),
                    "item_type": item.get("type")
                }):
                    process_item(item)
                    successful += 1
            except Exception as e:
                failed += 1
                print(f"Failed to process item {item.get('id')}: {e}")
        print(f"Batch completed: {successful} successful, {failed} failed")
```
### Retry Logic
```python theme={null}
import agentops
import time
agentops.init(api_key="your-api-key")
def retry_operation(operation_name, max_retries=3):
    for attempt in range(max_retries):
        try:
            with agentops.start_trace(f"{operation_name}_attempt_{attempt + 1}", tags={
                "operation": operation_name,
                "attempt": attempt + 1,
                "max_retries": max_retries
            }):
                # Your operation here
                result = perform_operation()
                return result  # Success - exit retry loop
        except Exception as e:
            if attempt < max_retries - 1:
                wait_time = 2 ** attempt  # Exponential backoff
                print(f"Attempt {attempt + 1} failed: {e}. Retrying in {wait_time}s...")
                time.sleep(wait_time)
            else:
                print(f"All {max_retries} attempts failed")
                raise
```
## Backward Compatibility
Context managers are fully backward compatible with existing AgentOps code patterns:
```python Manual Management theme={null}
import agentops
agentops.init(api_key="your-api-key")
# Manual trace management (legacy)
trace = agentops.start_trace("manual_trace")
# ... your code ...
agentops.end_trace(trace, "Success")
```
```python Context Manager theme={null}
import agentops
agentops.init(api_key="your-api-key")
# Context manager (new, recommended)
with agentops.start_trace("context_managed_trace") as trace:
    # ... your code ...
    pass  # Automatically ended
```
```python Property Access theme={null}
import agentops
agentops.init(api_key="your-api-key")
# Accessing trace properties
with agentops.start_trace("property_access") as trace:
    span = trace.span  # Access underlying span
    trace_id = trace.span.get_span_context().trace_id
```
```python Mixed Usage theme={null}
import agentops
agentops.init(api_key="your-api-key")
# Mixed usage
trace = agentops.start_trace("mixed_usage")
try:
    with trace:  # Use existing trace as context manager
        # ... your code ...
        pass
except Exception:
    agentops.end_trace(trace, "Error")
```
## Examples
For complete working examples, see the following files in the AgentOps repository:
* Simple context manager patterns and error handling
* Sequential, nested, and concurrent trace patterns
* Exception handling, retry patterns, and graceful degradation
* API endpoints, batch processing, microservices, and monitoring
These examples demonstrate real-world usage patterns and best practices for using AgentOps context managers in production applications.
## API Reference
For detailed API information, see the [SDK Reference](/v2/usage/sdk-reference#trace-management) documentation.
# Dashboard
Source: https://docs.agentops.ai/v2/usage/dashboard-info
Visualize your AgentOps analysis.
## Insights Dashboard
You need better insights to turn error-prone AI into stable workflows. Here's your new best friend.
### Session Drilldown
Here you'll find a list of all your previously recorded sessions, along with useful data about each, such as total execution time.
You also get helpful debugging info, such as the SDK versions you were using if you're building on a supported agent framework like CrewAI or AutoGen.
LLM calls are presented as a familiar chat history view, and charts give you a breakdown of the types of events that were called and how long they took.
Find any past sessions from your Session Drawer.
Most powerful of all is the Session Waterfall. On the left is a time visualization of all your LLM calls, Action events, Tool calls, and Errors.
On the right are specific details about the event you've selected on the waterfall, such as the exact prompt and completion for a given LLM call.
Most of this is recorded for you automatically.
### Session Overview
View a meta-analysis of all of your sessions in a single view.
# Manual Trace Control
Source: https://docs.agentops.ai/v2/usage/manual-trace-control
Advanced trace management with start_trace and end_trace methods
## Basic Manual Trace Control
### Starting and Ending Traces
The most basic form of manual trace control involves starting a trace, executing your code, and then ending the trace with a specific state:
```python theme={null}
import agentops
# Initialize without automatic session creation
agentops.init("your-api-key", auto_start_session=False)
# Start a trace manually
trace = agentops.start_trace("my-workflow")
try:
    # Your application logic here
    result = perform_some_operation()
    # End the trace successfully
    agentops.end_trace(trace, "Success")
except Exception as e:
    # End the trace with failure state
    agentops.end_trace(trace, "Error")
```
### Trace Names and Tags
You can provide meaningful names and tags when starting traces:
```python theme={null}
# Start a trace with custom name and tags
trace = agentops.start_trace(
    trace_name="customer-service-workflow",
    tags=["customer-123", "priority-high", "support"]
)
```
### Batch Processing with Selective Trace Ending
For batch processing scenarios, you can selectively end traces based on processing results:
```python theme={null}
import agentops
# Initialize AgentOps
agentops.init("your-api-key", auto_start_session=False)
# Sample batch items to process
batch_items = [
    {"id": 1, "data": "item_1_data", "valid": True},
    {"id": 2, "data": "item_2_data", "valid": False},
    {"id": 3, "data": "item_3_data", "valid": True},
]

@agentops.operation(name="process_item")
def process_item(item):
    """Simulate processing an item"""
    if not item.get("valid", False):
        raise ValueError(f"Invalid item: {item['id']}")
    return {"processed": True, "result": f"Processed {item['data']}"}

# Start traces for batch items
for i, item in enumerate(batch_items):
    trace = agentops.start_trace(f"batch_item_{i+1}")
    try:
        result = process_item(item)
        if result.get("processed"):
            agentops.end_trace(trace, "Success")
        else:
            agentops.end_trace(trace, "Indeterminate")
    except Exception as e:
        agentops.end_trace(trace, "Error")
```
## Updating Trace Metadata During Execution
You can update metadata on running traces at any point during execution using the `update_trace_metadata` function. This is useful for adding context, tracking progress, or storing intermediate results.
### Basic Metadata Updates
```python theme={null}
import agentops
# Initialize AgentOps
agentops.init("your-api-key", auto_start_session=False)
# Start a trace with initial tags
trace = agentops.start_trace("ai-agent-workflow", tags=["startup", "initialization"])
# Your AI agent code runs here...
process_user_request()
# Update metadata with results
agentops.update_trace_metadata({
    "operation_name": "AI Agent Processing Complete",
    "stage": "completed",
    "response_quality": "high",
    "tags": ["ai-agent", "completed", "success"]  # Tags show current status
})
# End the trace
agentops.end_trace(trace, "Success")
```
### Semantic Convention Support
The function automatically maps user-friendly keys to semantic conventions when possible:
```python theme={null}
# These keys will be mapped to semantic conventions
agentops.update_trace_metadata({
    "operation_name": "AI Agent Data Processing",
    "tags": ["production", "batch-job", "gpt-4"],  # Maps to core.tags
    "agent_name": "DataProcessorAgent",  # Maps to agent.name
    "workflow_name": "Intelligent ETL Pipeline",  # Maps to workflow.name
})
```
### Advanced Metadata with Custom Prefix
You can specify a custom prefix for your metadata attributes:
```python theme={null}
# Use a custom prefix for business-specific metadata
agentops.update_trace_metadata({
    "customer_id": "CUST_456",
    "order_value": 99.99,
    "payment_method": "credit_card",
    "agent_interaction": "customer_support"
}, prefix="business")

# Results in:
# business.customer_id = "CUST_456"
# business.order_value = 99.99
# business.payment_method = "credit_card"
### Real-World Example: Progress Tracking
Here's how to use metadata updates to track progress through a complex workflow:
```python theme={null}
import agentops
from agentops.sdk.decorators import operation
agentops.init(auto_start_session=False)
@operation
def process_batch(batch_data):
    # Simulate batch processing
    return f"Processed {len(batch_data)} items"

def run_etl_pipeline(data_batches):
    """ETL pipeline with progress tracking via metadata"""
    trace = agentops.start_trace("etl-pipeline", tags=["data-processing"])
    total_batches = len(data_batches)
    processed_records = 0

    # Initial metadata
    agentops.update_trace_metadata({
        "operation_name": "ETL Pipeline Execution",
        "pipeline_stage": "starting",
        "total_batches": total_batches,
        "processed_batches": 0,
        "processed_records": 0,
        "estimated_completion": "calculating...",
        "tags": ["etl", "data-processing", "async-operation"]
    })

    try:
        for i, batch in enumerate(data_batches):
            # Update progress
            agentops.update_trace_metadata({
                "pipeline_stage": "processing",
                "current_batch": i + 1,
                "processed_batches": i,
                "progress_percentage": round((i / total_batches) * 100, 2)
            })

            # Process the batch
            result = process_batch(batch)
            processed_records += len(batch)

            # Update running totals
            agentops.update_trace_metadata({
                "processed_records": processed_records,
                "last_batch_result": result
            })

        # Final metadata update
        agentops.update_trace_metadata({
            "operation_name": "ETL Pipeline Completed",
            "pipeline_stage": "completed",
            "processed_batches": total_batches,
            "progress_percentage": 100.0,
            "completion_status": "success",
            "total_execution_time": "calculated_automatically",
            "tags": ["etl", "completed", "success"]
        })

        agentops.end_trace(trace, "Success")
    except Exception as e:
        # Error metadata
        agentops.update_trace_metadata({
            "operation_name": "ETL Pipeline Failed",
            "pipeline_stage": "failed",
            "error_message": str(e),
            "completion_status": "error",
            "failed_at_batch": i + 1 if 'i' in locals() else 0,
            "tags": ["etl", "failed", "error"]
        })
        agentops.end_trace(trace, "Error")
        raise

# Example usage
data_batches = [
    ["record1", "record2", "record3"],
    ["record4", "record5"],
    ["record6", "record7", "record8", "record9"]
]

run_etl_pipeline(data_batches)
```
### Supported Data Types
The `update_trace_metadata` function supports various data types:
```python theme={null}
agentops.update_trace_metadata({
    "operation_name": "Multi-type Data Example",
    "successful_operation": True,
    "tags": ["example", "demo", "multi-agent"],
    "processing_steps": ["validation", "transformation", "output"]
})
# Note: Lists are automatically converted to JSON strings for OpenTelemetry compatibility
```
## Integration with Decorators
Manual trace control works seamlessly with AgentOps decorators:
```python theme={null}
import agentops
from agentops.sdk.decorators import agent, operation, tool
agentops.init("your-api-key", auto_start_session=False)
@agent
class CustomerServiceAgent:
    @operation
    def analyze_request(self, request):
        return f"Analyzed: {request}"

    @tool(cost=0.02)
    def lookup_customer(self, customer_id):
        return f"Customer data for {customer_id}"

# Manual trace with decorated components
trace = agentops.start_trace("customer-service")
try:
    service_agent = CustomerServiceAgent()  # avoid shadowing the `agent` decorator
    customer_data = service_agent.lookup_customer("CUST_123")
    analysis = service_agent.analyze_request("billing issue")
    agentops.end_trace(trace, "Success")
except Exception as e:
    agentops.end_trace(trace, "Error")
```
## Real-World Example
Here's a comprehensive example showing manual trace control in a customer service application:
```python theme={null}
import agentops
from agentops.sdk.decorators import agent, operation, tool
from openai import OpenAI
agentops.init(auto_start_session=False)
client = OpenAI()
@operation
def analyze_sentiment(text):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Analyze sentiment: {text}"}]
    )
    return response.choices[0].message.content.strip()

@tool(cost=0.01)
def lookup_order(order_id):
    return f"Order {order_id} details"

def process_customer_requests(requests):
    """Process multiple customer requests with individual trace tracking"""
    results = []
    for i, request in enumerate(requests):
        trace = agentops.start_trace(
            f"customer_request_{i+1}",
            tags=["customer-service", request.get("priority", "normal")]
        )
        try:
            sentiment = analyze_sentiment(request["message"])
            if "order" in request:
                order_info = lookup_order(request["order"])

            if "positive" in sentiment.lower() or "neutral" in sentiment.lower():
                agentops.end_trace(trace, "Success")
                results.append({"status": "resolved", "sentiment": sentiment})
            else:
                agentops.end_trace(trace, "Escalation_Required")
                results.append({"status": "escalated", "sentiment": sentiment})
        except Exception as e:
            agentops.end_trace(trace, "Error")
            results.append({"status": "error", "error": str(e)})
    return results

customer_requests = [
    {"message": "I love this product!", "priority": "low"},
    {"message": "My order is completely wrong!", "order": "12345", "priority": "high"},
    {"message": "When will my package arrive?", "order": "67890", "priority": "normal"}
]

results = process_customer_requests(customer_requests)
print(f"Processed {len(results)} customer requests")
```
This example demonstrates:
* Individual trace management for each customer request
* Integration with decorated agents and tools
* Different end states based on business logic
* Proper error handling with appropriate trace states
* Use of tags for categorization
# MCP Docs
Source: https://docs.agentops.ai/v2/usage/mcp-docs
Chat with the AgentOps documentation directly from your IDE using the Mintlify MCP Docs Server.
Looking for the AgentOps API tools instead? See the [MCP Server guide](/v2/usage/mcp-server).
# MCP Docs Server
The **Mintlify MCP Docs Server** gives you and your coding agents instant, programmatic access to every page in the AgentOps docs. It works in **Cursor**, **Windsurf**, **VS Code**, **Zed**, **Claude Code**, or any other tool that speaks the [Model Context Protocol](https://modelcontextprotocol.io/).
## Installation
```bash theme={null}
npx mint-mcp add agentops
```
Run the command above in any folder and your IDE will automatically register the docs server. No extra configuration required.
## How it works
When you (or your in-editor AI assistant) ask a question, the IDE sends a request to the MCP Docs Server. The server searches the AgentOps docs and returns the most relevant sections so the assistant can craft a precise answer.
> **Example prompt**
> "How do I record custom spans with the `@operation` decorator?"
## What you can ask
Here are a few ideas to get you started:
### Add features
* "Add a chat interface with streaming support to my app"
* "Instrument my agent with the `@trace` decorator"
### Ask about integrations
* "How do I integrate AgentOps with the Vercel AI SDK?"
* "Show me a working example for CrewAI"
### Debug or update existing code
* "My trace isn't showing spans—what could be wrong?"
* "How do I customize the styling of the Session Waterfall?"
If the answer lives in the docs, the server will find it.
## Common issues
### Server not starting
1. Ensure `npx` is installed and working.
2. Check for other MCP servers running on the same port.
3. Verify your configuration file syntax.
4. On Windows, confirm Node.js and npm are installed.
### Tool calls failing
1. Restart the MCP Docs Server and/or your IDE.
2. Update to the latest version of your IDE.
3. Confirm that the AgentOps docs server appears in your IDE’s list of MCP servers.
# MCP Server
Source: https://docs.agentops.ai/v2/usage/mcp-server
MCP server for accessing AgentOps trace and span data
# MCP Server
AgentOps provides a [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) server that exposes the Public API as a set of tools for AI assistants. This allows AI models to directly query your AgentOps data during conversations and debug AI agents with greater context.
### Configuration & Installation
Add the AgentOps MCP to your MCP client's configuration file.
**npx configuration:**
```json theme={null}
{
"mcpServers": {
"agentops": {
"command": "npx",
"args": [
"agentops-mcp"
],
"env": {
"AGENTOPS_API_KEY": ""
}
}
}
}
```
**Cursor Deeplink:**
Add the AgentOps MCP to Cursor with Deeplink.
[Install in Cursor](https://cursor.com/install-mcp?name=agentops\&config=eyJjb21tYW5kIjoibnB4IGFnZW50b3BzLW1jcCIsImVudiI6eyJBR0VOVE9QU19BUElfS0VZIjoiIn19)
**Smithery:**
To install agentops-mcp for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@AgentOps-AI/agentops-mcp):
```bash theme={null}
npx -y @smithery/cli install @AgentOps-AI/agentops-mcp --client claude
```
### Available Tools
The MCP server exposes the following tools that mirror the Public API endpoints:
#### `auth`
Authorize using an AgentOps project API key.
* **Parameters**: `api_key` (string) - Your AgentOps project API key
* **Usage**: The server will automatically prompt for authentication when needed
#### `get_trace`
Get trace information by ID.
* **Parameters**: `trace_id` (string) - The trace identifier
* **Returns**: Trace details and metrics
#### `get_span`
Get span information by ID.
* **Parameters**: `span_id` (string) - The span identifier
* **Returns**: Span attributes and metrics
#### `get_complete_trace`
Get complete trace information by ID.
* **Parameters**: `trace_id` (string) - The trace identifier
* **Returns**: Complete trace and associated span details
### Environment Variables
The MCP server supports the following environment variables:
* `AGENTOPS_API_KEY`: Your AgentOps project API key
* `HOST`: API endpoint (defaults to `https://api.agentops.ai`)
# Public API
Source: https://docs.agentops.ai/v2/usage/public-api
Read-only HTTP API for accessing AgentOps trace and span data
# Public API
The AgentOps Public API provides read-only HTTP access to your monitoring data. This RESTful API allows you to retrieve trace information, span details, and metrics from any application or framework, regardless of programming language.
This is a **read-only API** for accessing existing data. To create traces and spans, use the [AgentOps SDK](/v2/quickstart) or our instrumentation libraries.
## Base URL
All API requests should be made to:
```
https://api.agentops.ai
```
## Authentication
The API uses JWT token authentication. You'll need to exchange your API key for a JWT token first.
### Get Access Token
Convert your API key to a bearer token for API access.
```bash curl theme={null}
curl -X POST https://api.agentops.ai/public/v1/auth/access_token \
-H "Content-Type: application/json" \
-d '{
"api_key": "YOUR_API_KEY"
}'
```
```json Response theme={null}
{
"bearer": "eyJhbGciOiJIUzI1NiIs..."
}
```
```json Error Response theme={null}
{
"detail": [
{
"loc": ["body", "api_key"],
"msg": "field required",
"type": "value_error.missing"
}
]
}
```
**Important**: Bearer tokens are valid for **30 days**. Store them securely and refresh before expiration.
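As a sketch, the exchange can be wrapped in a small helper using only the standard library. Only the endpoint and JSON shapes shown above come from the API; the function names are illustrative:

```python
import json
import urllib.request

AUTH_URL = "https://api.agentops.ai/public/v1/auth/access_token"

def build_auth_request(api_key: str) -> urllib.request.Request:
    # POST body matches the curl example: {"api_key": "..."}
    body = json.dumps({"api_key": api_key}).encode()
    return urllib.request.Request(
        AUTH_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def parse_auth_response(raw: bytes) -> str:
    # The response body is {"bearer": "..."}
    return json.loads(raw)["bearer"]

def fetch_bearer_token(api_key: str) -> str:
    with urllib.request.urlopen(build_auth_request(api_key)) as resp:
        return parse_auth_response(resp.read())
```

Cache the returned token and call `fetch_bearer_token` again before the 30-day expiry.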
## Core Endpoints
### Get Project Information
Retrieve details about your current project.
```bash curl theme={null}
curl -X GET https://api.agentops.ai/public/v1/project \
-H "Authorization: Bearer YOUR_BEARER_TOKEN"
```
```json Response theme={null}
{
"id": "proj_abc123",
"name": "My AI Project",
"environment": "production"
}
```
This endpoint returns information about the project associated with your API key.
### Get Trace Details
Retrieve comprehensive information about a specific trace, including all its spans.
```bash curl theme={null}
curl -X GET https://api.agentops.ai/public/v1/traces/trace_123 \
-H "Authorization: Bearer YOUR_BEARER_TOKEN"
```
```json Response theme={null}
{
"trace_id": "trace_123",
"project_id": "proj_abc123",
"tags": ["production", "chatbot", "gpt-4"],
"spans": [
{
"span_id": "span_456",
"parent_span_id": null,
"span_name": "User Query Processing",
"span_kind": "SPAN_KIND_INTERNAL",
"start_time": "2024-03-14T12:00:00.000Z",
"end_time": "2024-03-14T12:00:05.000Z",
"duration": 5000,
"status_code": "STATUS_CODE_OK",
"status_message": "Success"
},
{
"span_id": "span_789",
"parent_span_id": "span_456",
"span_name": "OpenAI GPT-4 Call",
"span_kind": "SPAN_KIND_CLIENT",
"start_time": "2024-03-14T12:00:01.000Z",
"end_time": "2024-03-14T12:00:03.000Z",
"duration": 2000,
"status_code": "STATUS_CODE_OK",
"status_message": "Success"
}
]
}
```
```json Error Response theme={null}
{
"detail": [
{
"loc": ["path", "trace_id"],
"msg": "trace not found",
"type": "value_error.not_found"
}
]
}
```
**Parameters:**
* `trace_id` (path, required): The unique identifier of the trace
**Response Fields:**
* `trace_id`: Unique trace identifier
* `project_id`: Associated project ID
* `tags`: Array of tags associated with the trace
* `spans`: Array of span summaries within the trace
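Because each span carries a `parent_span_id`, the flat `spans` array can be regrouped into a tree on the client side. A minimal sketch (the helper name is illustrative):

```python
from collections import defaultdict

def group_spans_by_parent(spans):
    """Map each parent_span_id to the IDs of its child spans.

    Root spans (parent_span_id of None) end up under the None key.
    """
    children = defaultdict(list)
    for span in spans:
        children[span["parent_span_id"]].append(span["span_id"])
    return children
```

With the sample response above, `group_spans_by_parent(trace["spans"])[None]` holds the root span `span_456`, whose only child is `span_789`.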
### Get Trace Metrics
Retrieve aggregated metrics and statistics for a trace.
```bash curl theme={null}
curl -X GET https://api.agentops.ai/public/v1/traces/trace_123/metrics \
-H "Authorization: Bearer YOUR_BEARER_TOKEN"
```
```json Response theme={null}
{
"span_count": 5,
"trace_count": 1,
"success_count": 4,
"fail_count": 1,
"indeterminate_count": 0,
"prompt_tokens": 150,
"completion_tokens": 75,
"cache_read_input_tokens": 0,
"reasoning_tokens": 25,
"total_tokens": 250,
"prompt_cost": "0.0030",
"completion_cost": "0.0015",
"average_cost_per_trace": "0.0045",
"total_cost": "0.0045"
}
```
**Metrics Explained:**
* `span_count`: Total number of spans in the trace
* `success_count`/`fail_count`/`indeterminate_count`: Status breakdown
* `*_tokens`: Token usage breakdown by type
* `*_cost`: Cost calculations in USD
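Note that the cost fields are decimal strings, so sum them with `Decimal` rather than floats. As a quick consistency check against a metrics payload (the function name is illustrative):

```python
from decimal import Decimal

def check_trace_metrics(metrics):
    """Recompute totals that should be internally consistent in a metrics payload."""
    # Status counts should add up to the span count
    resolved = (
        metrics["success_count"]
        + metrics["fail_count"]
        + metrics["indeterminate_count"]
    )
    # Prompt and completion costs should add up to the total cost
    total_cost = Decimal(metrics["prompt_cost"]) + Decimal(metrics["completion_cost"])
    return resolved, total_cost
```

With the sample response above, this returns `(5, Decimal("0.0045"))`, matching `span_count` and `total_cost`.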
### Get Span Details
Retrieve comprehensive information about a specific span, including full attribute payloads.
```bash curl theme={null}
curl -X GET https://api.agentops.ai/public/v1/spans/span_456 \
-H "Authorization: Bearer YOUR_BEARER_TOKEN"
```
```json Response theme={null}
{
"span_id": "span_456",
"parent_span_id": null,
"span_name": "User Query Processing",
"span_kind": "SPAN_KIND_INTERNAL",
"service_name": "chatbot-service",
"start_time": "2024-03-14T12:00:00.000Z",
"end_time": "2024-03-14T12:00:05.000Z",
"duration": 5000,
"status_code": "STATUS_CODE_OK",
"status_message": "Success",
"attributes": {
"llm.model": "gpt-4-turbo",
"llm.prompt": "What is the weather like today?",
"llm.completion": "I need your location to provide weather information.",
"llm.usage.prompt_tokens": 50,
"llm.usage.completion_tokens": 25
},
"resource_attributes": {
"service.name": "chatbot-service",
"service.version": "1.2.3"
},
"span_attributes": {
"user_id": "user_123",
"session_id": "session_456"
}
}
```
**Parameters:**
* `span_id` (path, required): The unique identifier of the span
**Response Fields:**
* `attributes`: Core span data (LLM calls, tool usage, etc.)
* `resource_attributes`: Service and infrastructure metadata
* `span_attributes`: Custom attributes set by your application
### Get Span Metrics
Retrieve detailed metrics for a specific span.
```bash curl theme={null}
curl -X GET https://api.agentops.ai/public/v1/spans/span_456/metrics \
-H "Authorization: Bearer YOUR_BEARER_TOKEN"
```
```json Response theme={null}
{
"total_tokens": 75,
"prompt_tokens": 50,
"completion_tokens": 25,
"cache_read_input_tokens": 0,
"reasoning_tokens": 0,
"success_tokens": 75,
"fail_tokens": 0,
"indeterminate_tokens": 0,
"prompt_cost": "0.0015",
"completion_cost": "0.0005",
"total_cost": "0.0020"
}
```
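All of the GET endpoints above share the same base URL and bearer scheme, so one small helper covers them. A sketch built only on what is documented above (function names are illustrative):

```python
import json
import urllib.request

BASE_URL = "https://api.agentops.ai/public/v1"

def build_get_request(path: str, bearer: str) -> urllib.request.Request:
    # All read-only endpoints use the same Authorization header
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={"Authorization": f"Bearer {bearer}"},
    )

def api_get(path: str, bearer: str) -> dict:
    with urllib.request.urlopen(build_get_request(path, bearer)) as resp:
        return json.loads(resp.read())

# Usage (hypothetical IDs):
# trace = api_get("/traces/trace_123", bearer)
# metrics = api_get("/spans/span_456/metrics", bearer)
```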
## MCP Server
AgentOps provides a [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) server that exposes the Public API as tools for AI assistants. This allows AI models to directly query your AgentOps data during conversations.
### Configuration
Create an MCP server configuration file (typically `mcp_config.json`):
**Python-based configuration:**
```json theme={null}
{
"mcpServers": {
"agentops": {
"command": "python",
"args": ["-m", "agentops.mcp.server"],
"env": {
"AGENTOPS_API_KEY": "your-api-key-here"
}
}
}
}
```
**Docker-based configuration:**
```json theme={null}
{
"mcpServers": {
"agentops": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"AGENTOPS_API_KEY",
"agentops/agentops-mcp:latest"
],
"env": {
"AGENTOPS_API_KEY": "your-agentops-api-key-here"
}
}
}
}
```
### Available Tools
The MCP server exposes the following tools that mirror the Public API endpoints:
#### `auth`
Authorize using an AgentOps project API key.
* **Parameters**: `api_key` (string) - Your AgentOps project API key
* **Usage**: The server will automatically prompt for authentication when needed
#### `get_project`
Get details about the current project.
* **Parameters**: None
* **Returns**: Project information including ID, name, and environment
#### `get_trace`
Get comprehensive trace information by ID.
* **Parameters**: `trace_id` (string) - The trace identifier
* **Returns**: Trace details with associated spans
#### `get_trace_metrics`
Get aggregated metrics for a specific trace.
* **Parameters**: `trace_id` (string) - The trace identifier
* **Returns**: Cost, token usage, and performance metrics
#### `get_span`
Get detailed span information by ID.
* **Parameters**: `span_id` (string) - The span identifier
* **Returns**: Complete span data including attributes
#### `get_span_metrics`
Get metrics for a specific span.
* **Parameters**: `span_id` (string) - The span identifier
* **Returns**: Span-specific cost and token metrics
### Environment Variables
The MCP server supports the following environment variables:
* `AGENTOPS_API_KEY`: Your AgentOps project API key
* `HOST`: API endpoint (defaults to `https://api.agentops.ai`)
# Recording Operations
Source: https://docs.agentops.ai/v2/usage/recording-operations
Track operations and LLM calls in your agent applications.
AgentOps makes it easy to track operations and interactions in your AI applications with minimal setup.
## Basic Setup
The simplest way to get started with AgentOps is to initialize it at the beginning of your application:
```python theme={null}
import agentops
# Initialize AgentOps with your API key
agentops.init("your-api-key")
```
That's it! This single line of code will:
* Automatically create a session for tracking your application run
* Intercept and track all LLM calls to supported providers (OpenAI, Anthropic, etc.)
* Record relevant metrics such as token counts, costs, and response times
You can also set a custom trace name during initialization:
```python theme={null}
import agentops
# Initialize with custom trace name
agentops.init("your-api-key", trace_name="my-custom-workflow")
```
## Automatic Instrumentation
AgentOps automatically instruments calls to popular LLM providers without requiring any additional code:
```python theme={null}
import agentops
from openai import OpenAI
# Initialize AgentOps
agentops.init("your-api-key")
# Make LLM calls as usual - AgentOps will track them automatically
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello, world!"}]
)
```
This works with many popular LLM providers including:
* OpenAI
* Anthropic
* Google (Gemini)
* Cohere
* And more
## Advanced: Using Decorators for Detailed Instrumentation
For more detailed tracking, AgentOps provides decorators that allow you to explicitly instrument your code. This is optional but can provide more context in the dashboard.
### `@operation` Decorator
The `@operation` decorator helps track specific operations in your application:
```python theme={null}
from agentops.sdk.decorators import operation
@operation
def process_data(data):
    # Process the data
    result = f"processed: {data}"
    return result
```
### `@agent` Decorator
If you use agent classes, you can track them with the `@agent` decorator:
```python theme={null}
from agentops.sdk.decorators import agent, operation
@agent
class ResearchAgent:
    @operation
    def search(self, query):
        # Implementation of search
        return f"Results for: {query}"

def research_workflow(topic):
    research_agent = ResearchAgent()  # avoid shadowing the `agent` decorator
    results = research_agent.search(topic)
    return results

results = research_workflow("quantum computing")
```
### `@tool` Decorator
Track tool usage and costs with the `@tool` decorator. You can specify costs to get total cost tracking directly in your dashboard summary:
```python theme={null}
from agentops.sdk.decorators import tool
@tool(cost=0.05)
def web_search(query):
    # Tool implementation
    return f"Search results for: {query}"

@tool
def calculator(expression):
    # Tool without cost tracking
    return eval(expression)
```
### `@trace` Decorator
Create custom traces to group related operations using the `@trace` decorator. This is the recommended approach for most applications:
```python theme={null}
import agentops
from agentops.sdk.decorators import trace, agent, operation
# Initialize AgentOps without auto-starting session since we use @trace
agentops.init("your-api-key", auto_start_session=False)
@trace(name="customer-service-workflow", tags=["customer-support"])
def customer_service_workflow(customer_id):
    research_agent = ResearchAgent()
    results = research_agent.search(f"customer {customer_id}")
    return results
```
## Best Practices
1. **Keep it Simple**: For most applications, just initializing AgentOps with `agentops.init()` is sufficient.
2. **Use @trace for Custom Workflows**: When you need to group operations, use the `@trace` decorator instead of manual trace management.
3. **Meaningful Names and Tags**: When using decorators, choose descriptive names and relevant tags to make them easier to identify in the dashboard.
4. **Cost Tracking**: Use the `@tool` decorator with cost parameters to track tool usage costs in your dashboard.
# SDK Reference
Source: https://docs.agentops.ai/v2/usage/sdk-reference
All functions and classes exposed in the top layer of the SDK
# SDK Reference
This reference documents the functions and classes available with `import agentops` for the Python SDK. The AgentOps SDK is designed for easy integration with your agent applications, offering both simple auto-instrumentation and more detailed manual tracing capabilities.
This documentation covers the Python SDK. A TypeScript/JavaScript SDK is also available - see our [TypeScript SDK guide](/v2/usage/typescript-sdk) for details.
## Core Functions
These are the primary functions you'll use to initialize and configure AgentOps in your application.
### `init()`
Initializes the AgentOps SDK and automatically starts tracking your application.
**Parameters**:
* `api_key` (str, optional): API Key for AgentOps services. If not provided, the key will be read from the `AGENTOPS_API_KEY` environment variable.
* `endpoint` (str, optional): The endpoint for the AgentOps service. If not provided, will be read from the `AGENTOPS_API_ENDPOINT` environment variable. Defaults to `https://api.agentops.ai`.
* `app_url` (str, optional): The dashboard URL for the AgentOps app. If not provided, will be read from the `AGENTOPS_APP_URL` environment variable. Defaults to `https://app.agentops.ai`.
* `max_wait_time` (int, optional): The maximum time to wait in milliseconds before flushing the queue. Defaults to 5,000 (5 seconds).
* `max_queue_size` (int, optional): The maximum size of the event queue. Defaults to 512.
* `default_tags` (List\[str], optional): Default tags for the sessions that can be used for grouping or sorting later (e.g. \["GPT-4"]).
* `tags` (List\[str], optional): **\[Deprecated]** Use `default_tags` instead. Will be removed in v4.0.
* `instrument_llm_calls` (bool, optional): Whether to instrument LLM calls automatically. Defaults to True.
* `auto_start_session` (bool, optional): Whether to start a session automatically when the client is created. Set to False if running in a Jupyter Notebook. Defaults to True.
* `auto_init` (bool, optional): Whether to automatically initialize the client on import. Defaults to True.
* `skip_auto_end_session` (bool, optional): Don't automatically end session based on your framework's decision-making. Defaults to False.
* `env_data_opt_out` (bool, optional): Whether to opt out of collecting environment data. Defaults to False.
* `log_level` (str, int, optional): The log level to use for the client. Defaults to 'INFO'.
* `fail_safe` (bool, optional): Whether to suppress errors and continue execution when possible. Defaults to False.
* `exporter_endpoint` (str, optional): Endpoint for the exporter. If not provided, will be read from the `AGENTOPS_EXPORTER_ENDPOINT` environment variable. Defaults to `https://otlp.agentops.ai/v1/traces`.
* `export_flush_interval` (int, optional): Time interval in milliseconds between automatic exports of telemetry data. Defaults to 1000.
* `trace_name` (str, optional): Custom name for the automatically created trace. If not provided, a default name will be used.
**Returns**:
* If `auto_start_session=True`, returns the created Session object. Otherwise, returns None.
**Example**:
```python theme={null}
import agentops
# Basic initialization with automatic session creation
agentops.init("your-api-key")
# Initialize with custom trace name
agentops.init("your-api-key", trace_name="my-workflow")
```
### `configure()`
Updates client configuration after initialization. Supports the same parameters as `init()`.
**Parameters**:
* `api_key` (str, optional): API Key for AgentOps services.
* `endpoint` (str, optional): The endpoint for the AgentOps service.
* `app_url` (str, optional): The dashboard URL for the AgentOps app.
* `max_wait_time` (int, optional): Maximum time to wait in milliseconds before flushing the queue.
* `max_queue_size` (int, optional): Maximum size of the event queue.
* `default_tags` (List\[str], optional): Default tags for the sessions.
* `instrument_llm_calls` (bool, optional): Whether to instrument LLM calls.
* `auto_start_session` (bool, optional): Whether to start a session automatically.
* `auto_init` (bool, optional): Whether to automatically initialize the client on import.
* `skip_auto_end_session` (bool, optional): Whether to skip automatically ending the session when an instrumented framework signals completion.
* `env_data_opt_out` (bool, optional): Whether to opt out of collecting environment data.
* `log_level` (str, int, optional): The log level to use for the client.
* `fail_safe` (bool, optional): Whether to suppress errors and continue execution.
* `exporter` (object, optional): Custom span exporter for OpenTelemetry trace data.
* `processor` (object, optional): Custom span processor for OpenTelemetry trace data.
* `exporter_endpoint` (str, optional): Endpoint for the exporter.
* `export_flush_interval` (int, optional): Time interval in milliseconds between automatic exports of telemetry data.
* `trace_name` (str, optional): Custom name for traces.
**Example**:
```python theme={null}
import agentops
# Initialize first
agentops.init()
# Later, update configuration
agentops.configure(
max_wait_time=10000,
max_queue_size=200,
default_tags=["production", "gpt-4"],
trace_name="production-workflow"
)
```
### `get_client()`
Gets the singleton client instance. Most users won't need to use this function directly.
**Returns**:
* The AgentOps client instance.
## Trace Management
These functions help you manage the lifecycle of traces.
### `start_trace()`
Starts a new AgentOps trace manually. This is useful when you've disabled automatic session creation or need multiple separate traces.
**Parameters**:
* `trace_name` (str, optional): Name for the trace. If not provided, a default name will be used.
* `tags` (Union\[Dict\[str, Any], List\[str]], optional): Optional tags to attach to the trace, useful for filtering in the dashboard. Can be a list of strings or a dict of key-value pairs.
**Returns**:
* TraceContext object representing the started trace.
**Example**:
```python theme={null}
import agentops
# Initialize without auto-starting a session
agentops.init("your-api-key", auto_start_session=False)
# Start a trace manually
trace = agentops.start_trace("customer-service-workflow", tags=["customer-query"])
```
### `end_trace()`
Ends a specific trace or all active traces.
**Parameters**:
* `trace` (TraceContext, optional): The specific trace to end. If not provided, all active traces will be ended.
* `end_state` (str, optional): The end state for the trace(s). You can use any descriptive string that makes sense for your application (e.g., "Success", "Indeterminate", "Error", "Timeout", etc.).
**Example**:
```python theme={null}
import agentops
# End a specific trace
trace = agentops.start_trace("my-workflow")
# ... your code ...
agentops.end_trace(trace, "Success")
# End all active traces
agentops.end_trace(end_state="Emergency_Shutdown")
```
### `update_trace_metadata()`
Updates metadata on the currently running trace. This is useful for adding context, tracking progress, or storing intermediate results during trace execution.
**Parameters**:
* `metadata` (Dict\[str, Any]): Dictionary of key-value pairs to set as trace metadata. Values must be strings, numbers, booleans, or lists of these types. Lists are automatically converted to JSON string representation.
* `prefix` (str, optional): Prefix for metadata attributes. Defaults to "trace.metadata". Ignored for semantic convention attributes.
**Returns**:
* `bool`: True if metadata was successfully updated, False otherwise.
**Features**:
* **Semantic Convention Support**: User-friendly keys like `tags`, `agent_name`, and `workflow_name` are automatically mapped to OpenTelemetry semantic conventions.
* **Custom Attributes**: Non-semantic keys are prefixed with the specified prefix (default: "trace.metadata").
* **Type Safety**: Validates input types and converts lists to JSON strings for OpenTelemetry compatibility.
* **Error Handling**: Returns boolean success indicator and logs warnings for invalid data.
**Example**:
```python theme={null}
import agentops
from agentops import update_trace_metadata
# Initialize and start trace with initial tags
agentops.init(auto_start_session=False)
trace = agentops.start_trace("ai-workflow", tags=["startup", "initialization"])
# Your code here...
# Update metadata mid-run with new tags and operation info
update_trace_metadata({
"operation_name": "OpenAI GPT-4o-mini",
"tags": ["ai-agent", "processing", "gpt-4"], # Updates tags
"status": "processing"
})
# End the trace
agentops.end_trace(trace, "Success")
```
For detailed examples and use cases, see [Manual Trace Control](/v2/usage/manual-trace-control#updating-trace-metadata-during-execution).
## Decorators for Detailed Instrumentation
For more granular control, AgentOps provides decorators that explicitly track different components of your application. **The `@trace` decorator is the recommended approach for creating custom traces**, especially in multi-threaded environments. These decorators are imported from `agentops.sdk.decorators`.
```python theme={null}
import agentops
from agentops.sdk.decorators import trace, agent, operation, tool
# Initialize without automatic session creation
agentops.init("your-api-key", auto_start_session=False)
# Create and run a trace using the decorator
@trace
def my_workflow():
# Your workflow code here
pass
# Run the workflow, which creates and manages the trace
my_workflow()
```
### Available Decorators
* `@trace`: Creates a trace span for grouping related operations
* `@agent`: Creates an agent span for tracking agent operations
* `@operation` / `@task`: Creates operation/task spans for tracking specific operations (these are aliases)
* `@workflow`: Creates workflow spans for organizing related operations
* `@tool`: Creates tool spans for tracking tool usage and cost in agent operations. Supports cost parameter for tracking tool usage costs.
**Tool Decorator Example**:
```python theme={null}
from agentops.sdk.decorators import tool
@tool(cost=0.05)
def web_search(query):
# Tool implementation with cost tracking
return f"Search results for: {query}"
@tool
def calculator(expression):
    # Tool without cost tracking
    # NOTE: eval() is used here for brevity only; never eval untrusted input
    return eval(expression)
```
See [Decorators](/v2/concepts/decorators) for more detailed documentation on using these decorators.
## Legacy Functions
The following functions are **deprecated** and will be removed in v4.0. They are maintained for backward compatibility with older versions of the SDK and integrations. New code should use the functions and decorators described above instead. When used, these functions will log deprecation warnings.
* `start_session()`: **Deprecated.** Legacy function for starting sessions. Use `@trace` decorator or `start_trace()` instead.
* `end_session()`: **Deprecated.** Legacy function for ending sessions. Use `end_trace()` instead.
* `record(event)`: **Deprecated.** Legacy function to record an event. Replaced by decorator-based tracing.
* `track_agent()`: **Deprecated.** Legacy decorator for marking agents. Replaced by the `@agent` decorator.
* `track_tool()`: **Deprecated.** Legacy decorator for marking tools. Replaced by the `@tool` decorator.
* `ToolEvent()`, `ErrorEvent()`, `ActionEvent()`, `LLMEvent()`: **Deprecated.** Legacy event types. Replaced by automatic instrumentation and decorators.
# Trace Decorator
Source: https://docs.agentops.ai/v2/usage/trace-decorator
Create custom traces with the @trace decorator
## Basic Usage
### Simple Trace Creation
The `@trace` decorator automatically creates a trace span that encompasses the entire function execution. You can optionally specify custom names and tags to better organize and categorize your traces:
```python theme={null}
from agentops.sdk.decorators import trace
import agentops
# Initialize AgentOps
agentops.init("your-api-key", auto_start_session=False)
@trace(name="customer-workflow", tags=["production", "customer-service"])
def my_workflow():
"""A simple workflow wrapped in a trace"""
print("🚀 Starting customer workflow...")
print("📋 Processing customer request...")
# Your application logic here
print("✅ Customer workflow completed successfully!")
return "Workflow completed"
# Run the function - this creates and manages the trace automatically
print("🎬 Running traced workflow...")
result = my_workflow()
print(f"📊 Result: {result}")
```
Both `name` and `tags` parameters are optional. If no name is provided, the function name will be used as the trace name.
### Custom Trace Names
You can specify custom names for your traces:
```python theme={null}
@trace(name="customer-onboarding-flow")
def onboard_customer(customer_data):
"""Customer onboarding process"""
print(f"👋 Onboarding customer: {customer_data['name']}")
print("📝 Creating customer profile...")
print("📧 Sending welcome email...")
print("✅ Customer onboarding complete!")
return f"Onboarded customer: {customer_data['name']}"
@trace(name="data-processing-pipeline")
def process_data(input_data):
"""Data processing workflow"""
print(f"📊 Processing {len(input_data)} data items...")
print("🔄 Applying transformations...")
print("✅ Data processing complete!")
return f"Processed {len(input_data)} items"
# Usage examples
customer = {"name": "Alice Johnson", "email": "alice@example.com"}
result1 = onboard_customer(customer)
print(f"📋 Onboarding result: {result1}")
data_items = ["item1", "item2", "item3", "item4", "item5"]
result2 = process_data(data_items)
print(f"📋 Processing result: {result2}")
```
### Adding Tags to Traces
Tags help categorize and filter traces in your dashboard:
```python theme={null}
@trace(tags=["production", "high-priority"])
def critical_workflow():
"""Critical production workflow"""
print("🚨 Executing critical production workflow...")
print("⚡ High priority processing...")
print("✅ Critical task completed successfully!")
return "Critical task completed"
@trace(name="user-analysis", tags=["analytics", "user-behavior"])
def analyze_user_behavior(user_id):
"""Analyze user behavior patterns"""
print(f"🔍 Analyzing behavior for user: {user_id}")
print("📈 Gathering user interaction data...")
print("🧠 Running behavior analysis algorithms...")
print("✅ User behavior analysis complete!")
return f"Analysis complete for user {user_id}"
# Usage examples
print("🎬 Running critical workflow...")
result1 = critical_workflow()
print(f"📊 Critical workflow result: {result1}")
print("\n🎬 Running user analysis...")
result2 = analyze_user_behavior("user_12345")
print(f"📊 Analysis result: {result2}")
```
## Integration with Other Decorators
### Combining with Agent and Operation Decorators
The `@trace` decorator works seamlessly with other AgentOps decorators:
```python theme={null}
import agentops
from agentops.sdk.decorators import trace, agent, operation, tool
# Initialize AgentOps without auto-starting session since we use @trace
agentops.init("your-api-key", auto_start_session=False)
@agent
class DataAnalysisAgent:
def __init__(self):
print("🤖 DataAnalysisAgent initialized")
@operation
def collect_data(self, source):
print(f"📊 Collecting data from {source}...")
data = f"Data collected from {source}"
print(f"✅ Data collection complete: {data}")
return data
@tool(cost=0.05)
def analyze_data(self, data):
print(f"🧠 Analyzing data: {data}")
analysis = f"Analysis of {data}"
print(f"✅ Analysis complete: {analysis}")
return analysis
@operation
def generate_report(self, analysis):
print(f"📝 Generating report from: {analysis}")
report = f"Report: {analysis}"
print(f"✅ Report generated: {report}")
return report
@trace(name="complete-analysis-workflow")
def run_analysis_workflow(data_source):
"""Complete data analysis workflow"""
print(f"🚀 Starting analysis workflow for: {data_source}")
print("=" * 50)
agent = DataAnalysisAgent()
# Collect data
print("\n📋 Step 1: Data Collection")
data = agent.collect_data(data_source)
# Analyze data
print("\n📋 Step 2: Data Analysis")
analysis = agent.analyze_data(data)
# Generate report
print("\n📋 Step 3: Report Generation")
report = agent.generate_report(analysis)
print("\n🎉 Workflow completed successfully!")
print("=" * 50)
return {
"source": data_source,
"report": report
}
# Usage
print("🎬 Running complete analysis workflow...")
result = run_analysis_workflow("customer_database")
print(f"\n📊 Final Result:")
print(f" Source: {result['source']}")
print(f" Report: {result['report']}")
```
## Async Function Support
The `@trace` decorator fully supports async functions:
```python theme={null}
import asyncio
import agentops
from agentops.sdk.decorators import trace, operation
# Initialize AgentOps without auto-starting session since we use @trace
agentops.init("your-api-key", auto_start_session=False)
@operation
async def fetch_user_data(user_id):
"""Simulate async data fetching"""
print(f"🌐 Fetching data for user: {user_id}")
await asyncio.sleep(1) # Simulate API call
data = f"User data for {user_id}"
print(f"✅ Data fetched: {data}")
return data
@operation
async def process_user_data(user_data):
"""Simulate async data processing"""
print(f"⚙️ Processing user data: {user_data}")
await asyncio.sleep(0.5) # Simulate processing
processed = f"Processed: {user_data}"
print(f"✅ Processing complete: {processed}")
return processed
@trace(name="async-user-workflow")
async def async_user_workflow(user_id):
"""Async workflow for user processing"""
print(f"🚀 Starting async workflow for user: {user_id}")
print("=" * 45)
print("\n📋 Step 1: Fetching user data")
user_data = await fetch_user_data(user_id)
print("\n📋 Step 2: Processing user data")
processed_data = await process_user_data(user_data)
print("\n🎉 Async workflow completed!")
print("=" * 45)
return processed_data
# Usage
async def main():
print("🎬 Running async user workflow...")
result = await async_user_workflow("user_123")
print(f"\n📊 Final Result: {result}")
print("✨ Check your AgentOps dashboard to see the traced async workflow!")
# Run the async workflow
print("🔄 Starting async demo...")
asyncio.run(main())
```
## Error Handling and Trace States
### Automatic Error Handling
The `@trace` decorator automatically handles exceptions and sets appropriate trace states:
```python theme={null}
import agentops
from agentops.sdk.decorators import trace
# Initialize AgentOps without auto-starting session since we use @trace
agentops.init("your-api-key", auto_start_session=False)
@trace(name="error-prone-workflow")
def risky_operation():
"""Operation that might fail"""
import random
print("🎲 Running risky operation...")
print("⚠️ This operation has a 50% chance of failure")
if random.random() < 0.5:
print("❌ Operation failed!")
raise ValueError("Random failure occurred")
print("✅ Operation succeeded!")
return "Operation succeeded"
# The trace will automatically be marked with failure state if an exception occurs
print("🎬 Testing automatic error handling...")
for i in range(3):
print(f"\n🔄 Attempt {i+1}:")
try:
result = risky_operation()
print(f"📊 Success: {result}")
break
except ValueError as e:
print(f"📊 Operation failed: {e}")
print("🔍 Trace automatically ended with error state")
```
### Custom Error Handling
You can implement custom error handling within traced functions:
```python theme={null}
@trace(name="robust-workflow")
def robust_operation(data):
"""Operation with custom error handling"""
print(f"🚀 Starting robust operation with data: {data}")
try:
# Risky operation
if not data:
print("⚠️ No data provided!")
raise ValueError("No data provided")
# Process data
print("⚙️ Processing data...")
result = f"Processed: {data}"
print(f"✅ Processing successful: {result}")
return {"success": True, "result": result}
except ValueError as e:
# Handle specific errors
print(f"❌ Validation error: {e}")
return {"success": False, "error": str(e)}
except Exception as e:
# Handle unexpected errors
print(f"💥 Unexpected error: {e}")
return {"success": False, "error": f"Unexpected error: {str(e)}"}
# Usage examples
print("\n🎬 Testing custom error handling...")
print("\n📋 Test 1: Valid data")
result1 = robust_operation("valid_data")
print(f"📊 Result: {result1}")
print("\n📋 Test 2: Empty data")
result2 = robust_operation("")
print(f"📊 Result: {result2}")
print("\n📋 Test 3: None data")
result3 = robust_operation(None)
print(f"📊 Result: {result3}")
```
## Real-World Examples
### E-commerce Order Processing
```python theme={null}
from agentops.sdk.decorators import trace, agent, operation, tool
import agentops
agentops.init("your-api-key", auto_start_session=False)
@agent
class OrderProcessor:
def __init__(self):
print("🛒 OrderProcessor initialized")
@tool(cost=0.01)
def validate_payment(self, payment_info):
"""Payment validation service"""
print(f"💳 Validating payment: {payment_info['card']}")
result = {"valid": True, "transaction_id": "txn_123"}
print(f"✅ Payment validation successful: {result['transaction_id']}")
return result
@tool(cost=0.02)
def check_inventory(self, product_id, quantity):
"""Inventory check service"""
print(f"📦 Checking inventory for {product_id} (qty: {quantity})")
result = {"available": True, "reserved": quantity}
print(f"✅ Inventory check complete: {quantity} units available")
return result
@operation
def calculate_shipping(self, address, items):
"""Calculate shipping costs"""
print(f"🚚 Calculating shipping to {address['city']}, {address['state']}")
result = {"cost": 9.99, "method": "standard"}
print(f"✅ Shipping calculated: ${result['cost']} ({result['method']})")
return result
@tool(cost=0.005)
def send_confirmation_email(self, email, order_details):
"""Email service"""
print(f"📧 Sending confirmation email to {email}")
result = f"Confirmation sent to {email}"
print(f"✅ Email sent successfully")
return result
@trace(name="order-processing", tags=["ecommerce", "orders"])
def process_order(order_data):
"""Complete order processing workflow"""
print(f"🚀 Starting order processing for {order_data['customer_email']}")
print("=" * 60)
processor = OrderProcessor()
try:
# Validate payment
print("\n📋 Step 1: Payment Validation")
payment_result = processor.validate_payment(order_data["payment"])
if not payment_result["valid"]:
print("❌ Payment validation failed!")
return {"success": False, "error": "Payment validation failed"}
# Check inventory for all items
print("\n📋 Step 2: Inventory Check")
for item in order_data["items"]:
inventory_result = processor.check_inventory(
item["product_id"],
item["quantity"]
)
if not inventory_result["available"]:
print(f"❌ Item {item['product_id']} not available!")
return {"success": False, "error": f"Item {item['product_id']} not available"}
# Calculate shipping
print("\n📋 Step 3: Shipping Calculation")
shipping = processor.calculate_shipping(
order_data["shipping_address"],
order_data["items"]
)
# Send confirmation
print("\n📋 Step 4: Confirmation Email")
confirmation = processor.send_confirmation_email(
order_data["customer_email"],
{
"items": order_data["items"],
"shipping": shipping,
"payment": payment_result
}
)
print("\n🎉 Order processing completed successfully!")
print("=" * 60)
return {
"success": True,
"order_id": "ORD_12345",
"payment": payment_result,
"shipping": shipping,
"confirmation": confirmation
}
except Exception as e:
print(f"💥 Order processing failed: {e}")
return {"success": False, "error": str(e)}
# Usage
print("🎬 Running e-commerce order processing demo...")
order = {
"customer_email": "customer@example.com",
"payment": {"card": "****1234", "amount": 99.99},
"items": [{"product_id": "PROD_001", "quantity": 2}],
"shipping_address": {"city": "New York", "state": "NY"}
}
result = process_order(order)
print(f"\n📊 ORDER PROCESSING RESULT:")
print(f" Success: {result['success']}")
if result['success']:
print(f" Order ID: {result['order_id']}")
print(f" Transaction: {result['payment']['transaction_id']}")
print(f" Shipping: ${result['shipping']['cost']}")
else:
print(f" Error: {result['error']}")
```
### Data Analysis Workflow
```python theme={null}
from agentops.sdk.decorators import trace, agent, operation, tool
from openai import OpenAI
import agentops
agentops.init("your-api-key", auto_start_session=False)
@agent
class DataAnalysisAgent:
def __init__(self):
self.client = OpenAI()
print("🤖 DataAnalysisAgent initialized")
@operation
def collect_data(self, source):
"""Simulate data collection"""
print(f"📊 Collecting data from {source}...")
data = f"Raw data collected from {source}: [sample_data_1, sample_data_2, sample_data_3]"
print(f"✅ Data collection complete: {len(data)} characters collected")
return data
@operation
def analyze_data_with_llm(self, data):
"""Use LLM to analyze the collected data"""
print("🧠 Analyzing data with LLM...")
response = self.client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "You are a data analyst. Analyze the provided data and give insights."},
{"role": "user", "content": f"Please analyze this data: {data}"}
]
)
analysis = response.choices[0].message.content
print(f"✅ LLM analysis complete: {len(analysis)} characters generated")
return analysis
@tool(cost=0.05)
def generate_visualization(self, analysis):
"""Generate data visualization"""
print("📈 Generating visualization...")
visualization = f"Chart generated for: {analysis[:50]}..."
print(f"✅ Visualization generated: {visualization}")
return visualization
@operation
def generate_report(self, analysis, visualization):
"""Generate final report using LLM"""
print("📝 Generating final report with LLM...")
response = self.client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "You are a report writer. Create a professional data analysis report."},
{"role": "user", "content": f"Create a report based on this analysis: {analysis} and visualization: {visualization}"}
]
)
report = response.choices[0].message.content
print(f"✅ Final report generated: {len(report)} characters")
return report
@trace(name="data-analysis-workflow", tags=["analytics", "reporting"])
def run_data_analysis(data_source):
"""Complete data analysis workflow with LLM integration"""
print(f"🚀 Starting data analysis workflow for: {data_source}")
print("=" * 60)
agent = DataAnalysisAgent()
# Collect data
print("\n📋 Step 1: Data Collection")
raw_data = agent.collect_data(data_source)
# Analyze data using LLM
print("\n📋 Step 2: LLM Analysis")
analysis = agent.analyze_data_with_llm(raw_data)
# Generate visualization
print("\n📋 Step 3: Visualization Generation")
visualization = agent.generate_visualization(analysis)
# Generate final report using LLM
print("\n📋 Step 4: Report Generation")
report = agent.generate_report(analysis, visualization)
print("\n🎉 Workflow completed successfully!")
print("=" * 60)
return {
"source": data_source,
"raw_data": raw_data,
"analysis": analysis,
"visualization": visualization,
"final_report": report
}
# Usage
print("🎬 Running data analysis workflow demo...")
result = run_data_analysis("customer_database")
print(f"\n📊 ANALYSIS RESULTS:")
print(f" Data Source: {result['source']}")
print(f" Raw Data: {result['raw_data'][:80]}...")
print(f" Analysis Preview: {result['analysis'][:100]}...")
print(f" Visualization: {result['visualization']}")
print(f" Final Report Preview: {result['final_report'][:150]}...")
print(f"\n✨ Analysis complete! Check your AgentOps dashboard to see the traced workflow.")
```
## Best Practices
### 1. Use Meaningful Names
Choose descriptive names that clearly indicate what the trace represents:
```python theme={null}
# Good
@trace(name="user-authentication-flow")
def authenticate_user(credentials):
pass
@trace(name="payment-processing-pipeline")
def process_payment(payment_data):
pass
# Less descriptive
@trace(name="trace1")
def some_function():
pass
```
### 2. Add Relevant Tags
Use tags to categorize traces for easier filtering and analysis:
```python theme={null}
@trace(name="order-fulfillment", tags=["ecommerce", "fulfillment", "high-priority"])
def fulfill_order(order_id):
pass
@trace(name="data-sync", tags=["background-job", "data-processing"])
def sync_data():
pass
```
### 3. Keep Traces Focused
Each trace should represent a logical unit of work:
```python theme={null}
# Good - focused on a single workflow
@trace(name="customer-onboarding")
def onboard_customer(customer_data):
validate_customer(customer_data)
create_account(customer_data)
send_welcome_email(customer_data)
# Less focused - mixing different concerns
@trace(name="mixed-operations")
def do_everything():
onboard_customer(data1)
process_orders(data2)
generate_reports(data3)
```
### 4. Handle Errors Appropriately
Implement proper error handling within traced functions:
```python theme={null}
@trace(name="data-processing")
def process_data(data):
try:
# Main processing logic
result = complex_processing(data)
return {"success": True, "result": result}
except ValidationError as e:
# Expected errors
return {"success": False, "error": "validation_failed", "details": str(e)}
except Exception as e:
# Unexpected errors
logger.error(f"Unexpected error in data processing: {e}")
return {"success": False, "error": "processing_failed"}
```
The `@trace` decorator provides a powerful and flexible way to organize your application's telemetry data. By creating logical groupings of operations, you can better understand your application's behavior and performance characteristics in the AgentOps dashboard.
# Track Endpoint Decorator
Source: https://docs.agentops.ai/v2/usage/track-endpoint-decorator
HTTP endpoint tracing for Flask applications using the @track_endpoint decorator
## Overview
The `@track_endpoint` decorator provides HTTP endpoint tracing for Flask applications with automatic request/response monitoring. It's designed to work seamlessly with Flask and extends the functionality of the basic `@trace` decorator.
## Quick Example with OpenAI
Here's a simple Flask endpoint that generates text using OpenAI:
```python theme={null}
from flask import Flask, request
from openai import OpenAI
import agentops
# Initialize AgentOps
agentops.init(
api_key="your-api-key",
auto_start_session=False, # Required for endpoint tracing
)
app = Flask(__name__)
client = OpenAI()
@app.route("/api/generate", methods=["POST"])
@agentops.track_endpoint(
name="generate_text",
tags=["ai", "openai"]
)
def generate_text():
"""Generate text using OpenAI"""
data = request.get_json()
prompt = data.get("prompt", "Hello!")
# OpenAI call is automatically traced
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": prompt}],
max_tokens=150
)
return {
"text": response.choices[0].message.content,
"usage": {
"total_tokens": response.usage.total_tokens
}
}
if __name__ == "__main__":
app.run(debug=True)
```
The decorator automatically captures:
* HTTP request data (method, URL, headers, body)
* HTTP response data (status code, headers, body)
* OpenAI API calls and their results
* Any errors that occur during request processing
You can customize tracing with parameters like:
* `name`: Custom name for the trace
* `tags`: List or dict of tags for categorizing traces
* `capture_request`: Whether to capture request data (default: True)
* `capture_response`: Whether to capture response data (default: True)
# Tracking Agents
Source: https://docs.agentops.ai/v2/usage/tracking-agents
Associate operations with specific named agents
AgentOps automatically tracks LLM interactions in your application. For more detailed tracking, especially in multi-agent systems, you can use the `@agent` decorator to associate operations with specific agents.
## Using the Agent Decorator
For structured tracking in complex applications, you can use the `@agent` decorator to explicitly identify different agents in your system:
```python theme={null}
import agentops
from agentops.sdk.decorators import agent, operation, trace
from openai import OpenAI
# Initialize AgentOps without auto-starting session since we use @trace
agentops.init("your-api-key", auto_start_session=False)
# Create a decorated agent class
@agent(name='ResearchAgent')
class MyAgent:
def __init__(self):
self.client = OpenAI()
@operation
def search(self, query):
response = self.client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": f"Research about: {query}"}]
)
return response.choices[0].message.content
# Create a trace to group the agent operations
@trace(name="research-workflow")
def research_workflow(topic):
agent = MyAgent()
result = agent.search(topic)
return result
# Execute the function to properly register the agent span
result = research_workflow("quantum computing")
```
If you don't specify a name, the agent will use the class name by default:
```python theme={null}
@agent
class ResearchAgent:
# This agent will have the name "ResearchAgent"
pass
```
## Basic Agent Tracking (Simple Applications)
For simple applications, AgentOps will automatically track your LLM calls without additional configuration:
```python theme={null}
import agentops
from openai import OpenAI
# Initialize AgentOps
agentops.init("your-api-key")
# Create a simple agent function
def research_agent(query):
client = OpenAI()
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": f"Research about: {query}"}]
)
return response.choices[0].message.content
# Use your agent - all LLM calls will be tracked automatically
result = research_agent("quantum computing")
```
## Multi-Agent Systems
For complex multi-agent systems, you can organize multiple agents within a single trace:
```python theme={null}
import agentops
from agentops.sdk.decorators import agent, operation, tool, trace
# Initialize AgentOps without auto-starting session since we use @trace
agentops.init("your-api-key", auto_start_session=False)
@agent
class DataCollectionAgent:
@tool(cost=0.02)
def fetch_data(self, source):
return f"Data from {source}"
@agent
class AnalysisAgent:
@operation
def analyze_data(self, data):
return f"Analysis of {data}"
@agent
class ReportingAgent:
@tool(cost=0.01)
def generate_report(self, analysis):
return f"Report: {analysis}"
@trace(name="multi-agent-workflow")
def collaborative_workflow(data_source):
"""Workflow using multiple specialized agents"""
# Data collection
collector = DataCollectionAgent()
raw_data = collector.fetch_data(data_source)
# Analysis
analyzer = AnalysisAgent()
analysis = analyzer.analyze_data(raw_data)
# Reporting
reporter = ReportingAgent()
report = reporter.generate_report(analysis)
return {
"source": data_source,
"analysis": analysis,
"report": report
}
# Run the collaborative workflow
result = collaborative_workflow("customer_database")
```
## Agent Communication and Coordination
You can track complex agent interactions and communication patterns:
```python theme={null}
import agentops
from agentops.sdk.decorators import agent, operation, tool, trace

# Initialize AgentOps without auto-starting a session since we use @trace
agentops.init("your-api-key", auto_start_session=False)

@agent
class CoordinatorAgent:
    def __init__(self):
        self.task_queue = []

    @operation
    def assign_task(self, task, agent_type):
        self.task_queue.append({"task": task, "agent": agent_type})
        return f"Task assigned to {agent_type}: {task}"

    @operation
    def collect_results(self, results):
        return f"Collected {len(results)} results"

@agent
class WorkerAgent:
    def __init__(self, agent_id):
        self.agent_id = agent_id

    @tool(cost=0.05)
    def process_task(self, task):
        return f"Agent {self.agent_id} processed: {task}"

@trace(name="coordinated-processing")
def coordinated_processing_workflow(tasks):
    """Workflow with agent coordination."""
    coordinator = CoordinatorAgent()
    workers = [WorkerAgent(f"worker_{i}") for i in range(3)]

    # Assign tasks
    assignments = []
    for i, task in enumerate(tasks):
        worker_type = f"worker_{i % len(workers)}"
        assignment = coordinator.assign_task(task, worker_type)
        assignments.append(assignment)

    # Process tasks
    results = []
    for i, task in enumerate(tasks):
        worker = workers[i % len(workers)]
        result = worker.process_task(task)
        results.append(result)

    # Collect results
    summary = coordinator.collect_results(results)

    return {
        "assignments": assignments,
        "results": results,
        "summary": summary
    }

# Run the coordinated workflow
tasks = ["analyze_data", "generate_report", "send_notification"]
result = coordinated_processing_workflow(tasks)
```
## Dashboard Visualization
All operations are automatically associated with the agent that originated them. Each agent is given a name, and that name is what you will see in the dashboard, so choose names that clearly identify each agent's role.
## Best Practices
1. **Start Simple**: For most applications, just using `agentops.init()` is sufficient.
2. **Use Decorators When Needed**: Add the `@agent` decorator when you need to clearly distinguish between multiple agents in your system.
3. **Meaningful Names**: Choose descriptive names for your agents to make them easier to identify in the dashboard.
4. **Organize with Traces**: Use the `@trace` decorator to group related agent operations into logical workflows.
5. **Track Costs**: Use the `@tool` decorator with cost parameters to track the expenses associated with agent operations.
6. **Agent Specialization**: Create specialized agents for different types of tasks to improve observability and maintainability.
## Migration from Session Decorator
If you're migrating from the legacy `@session` decorator, replace it with the `@trace` decorator:
```python theme={null}
# New approach (recommended)
from agentops.sdk.decorators import trace, agent

@trace(name="my-workflow")
def my_workflow():
    # workflow code
    pass

# Old approach (deprecated)
from agentops.sdk.decorators import session, agent

@session
def my_workflow():
    # workflow code
    pass
```
The `@trace` decorator provides the same functionality as the legacy `@session` decorator but with more flexibility and better integration with the new trace management features.
# Tracking LLM Calls
Source: https://docs.agentops.ai/v2/usage/tracking-llm-calls
Tracking LLM Calls using the AgentOps SDK
## Automatic LLM Call Tracking
AgentOps makes tracking LLM calls incredibly simple. Just initialize the SDK with your API key, and AgentOps will automatically track all your LLM calls:
```python theme={null}
import agentops
from openai import OpenAI

# Initialize AgentOps
agentops.init("your-api-key")

# Make LLM calls as usual - they'll be tracked automatically
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello, world!"}]
)
```
### How it works
When the AgentOps SDK detects that a supported LLM provider module is installed, it automatically starts tracking its usage. No further work is required from you! 😊
### Supported LLM Providers
AgentOps supports automatic tracking for many popular LLM providers, including:
* OpenAI
* Anthropic
* Google (Gemini)
* LiteLLM
* And more
### Not working?
Try these steps:
1. Make sure you have the latest version of the AgentOps SDK installed. We are constantly updating it to support new LLM libraries and releases.
2. Make sure you call `agentops.init()` *after* importing the LLM module but *before* making any LLM calls.
3. Make sure the `instrument_llm_calls` parameter of `agentops.init()` is set to `True` (default).
Still not working? Please let us know! You can find us on [Discord](https://discord.gg/DR2abmETjZ),
[GitHub](https://github.com/AgentOps-AI/agentops),
or email us at [engineering@agentops.ai](mailto:engineering@agentops.ai).
# TypeScript SDK
Source: https://docs.agentops.ai/v2/usage/typescript-sdk
Get started with the AgentOps TypeScript SDK for Node.js applications
# TypeScript SDK
AgentOps provides TypeScript/JavaScript support through two SDK options:
## Modern TypeScript SDK (Recommended)
The modern TypeScript SDK is built on OpenTelemetry standards and provides comprehensive instrumentation for AI agents.
### Installation
```bash theme={null}
npm install agentops
```
### Quick Start
```typescript theme={null}
import { agentops } from 'agentops';

// Initialize with environment variable AGENTOPS_API_KEY
await agentops.init();

// Or pass API key explicitly
await agentops.init({
  apiKey: 'your-api-key'
});

// Your AI agent code here - instrumentation happens automatically!
```
### Features
* 🔌 **Plugin Architecture**: Dynamic loading and configuration of instrumentors
* 🤖 **GenAI Support**: Built-in support for OpenTelemetry GenAI semantic conventions
* 📊 **Standards Compliant**: Exports to any OpenTelemetry-compatible collector
* 🛠️ **Framework Agnostic**: Instrument multiple agent frameworks simultaneously
* 🔧 **TypeScript First**: Full TypeScript support with comprehensive type definitions
### OpenAI Agents Integration
The SDK provides first-class support for the [OpenAI Agents SDK](https://github.com/openai/openai-agents-js/):
```typescript theme={null}
import { agentops } from 'agentops';
import { Agent, run } from '@openai/agents';

// Initialize AgentOps first
await agentops.init();

// Create your agent with tools and instructions
const agent = new Agent({
  name: 'My Assistant',
  instructions: 'You are a helpful assistant.',
  tools: [/* your tools */],
});

// Run the agent - instrumentation happens automatically
const result = await run(agent, "Hello, how can you help me?");
console.log(result.finalOutput);
```
Automatically captures:
* **Agent Lifecycle**: Track agent creation, execution, and completion
* **LLM Generation**: Capture model requests, responses, and token usage
* **Function Calls**: Monitor tool usage and function execution
* **Audio Processing**: Observe speech-to-text and text-to-speech operations
* **Handoffs**: Track agent-to-agent communication and workflow transitions
### Debug Logging
Enable detailed instrumentation logs:
```bash theme={null}
DEBUG=agentops:* node your-app.js
```
## Legacy TypeScript SDK (Alpha)
The legacy TypeScript SDK has limited functionality compared to the Python SDK. The modern TypeScript SDK above is recommended for new projects.
### Installation
```bash theme={null}
npm install agentops
```
### Usage
```typescript theme={null}
import OpenAI from "openai";
import { Client } from 'agentops';

const openai = new OpenAI();
const agentops = new Client({
  apiKey: "your-agentops-api-key",
  tags: ["typescript", "example"],
  patchApi: [openai] // Automatically record OpenAI calls
});

// Sample OpenAI call (automatically recorded)
async function chat() {
  const completion = await openai.chat.completions.create({
    messages: [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Hello!" }
    ],
    model: "gpt-3.5-turbo",
  });
  return completion;
}

// Track custom functions
function customFunction(x: string) {
  console.log(x);
  return 5;
}

const wrappedFunction = agentops.wrap(customFunction);
wrappedFunction("hello");

// Run your agent
chat().then(() => {
  agentops.endSession("Success");
});
```
## Getting Help
* [Discord Community](https://discord.gg/FagdcwwXRR)
* [GitHub Issues](https://github.com/AgentOps-AI/agentops/issues)
* [Documentation](https://docs.agentops.ai)