
Multiple Concurrent Traces

This example demonstrates how to run multiple traces (sessions) concurrently using both the modern trace-based API and the legacy session API for backwards compatibility.

First, let’s install the required packages:

pip install -U openai
pip install -U agentops
pip install -U python-dotenv

Then import them:

from openai import OpenAI
import agentops
import os
from dotenv import load_dotenv

Next, we’ll set our API keys. There are several ways to do this; the code below is simply the most foolproof for the purposes of this example. It accounts both for users who use environment variables and for those who just want to set the API key here.

Get an AgentOps API key

  1. Create an environment variable in a .env file or via another method. By default, the AgentOps init() function will look for an environment variable named AGENTOPS_API_KEY. Or…

  2. Replace <your_agentops_key> below and pass in the optional api_key parameter to the AgentOps init(api_key=...) function. Remember not to commit your API key to a public repo!

load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") or "<your_openai_key>"
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY") or "<your_agentops_key>"

Initialize AgentOps. We’ll disable auto-start so we can create our traces manually:

agentops.init(AGENTOPS_API_KEY, auto_start_session=False)
client = OpenAI()

Modern Trace-Based Approach

The recommended approach uses start_trace() and end_trace():

# Create multiple concurrent traces
trace_1 = agentops.start_trace("user_query_1", tags=["experiment_a"])
trace_2 = agentops.start_trace("user_query_2", tags=["experiment_b"])

print(f"Trace 1 ID: {trace_1.span.get_span_context().trace_id}")
print(f"Trace 2 ID: {trace_2.span.get_span_context().trace_id}")

LLM Calls with Automatic Tracking

With the modern implementation, LLM calls are automatically tracked without needing special session assignment:

# LLM calls are automatically tracked and associated with the current context
messages_1 = [{"role": "user", "content": "Hello from trace 1"}]
response_1 = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages_1,
    temperature=0.5,
)

messages_2 = [{"role": "user", "content": "Hello from trace 2"}]
response_2 = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages_2,
    temperature=0.5,
)

Using Context Managers

You can also use traces as context managers for automatic cleanup:

with agentops.start_trace("context_managed_trace") as trace:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello from context manager"}],
        temperature=0.5,
    )
    # Trace automatically ends when exiting the context

Using Decorators

For even cleaner code, use decorators:

@agentops.trace
def process_user_query(query: str):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": query}],
        temperature=0.5,
    )
    return response.choices[0].message.content

# Each function call creates its own trace
result_1 = process_user_query("What is the weather like?")
result_2 = process_user_query("Tell me a joke")
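
Because each decorated call starts and ends its own trace, you can also dispatch these calls from worker threads to make the traces genuinely concurrent. A minimal sketch using the standard library’s ThreadPoolExecutor (the thread pool is our addition, not an AgentOps requirement):

from concurrent.futures import ThreadPoolExecutor

# Run both queries at the same time; each call still creates its own trace
with ThreadPoolExecutor(max_workers=2) as pool:
    future_1 = pool.submit(process_user_query, "What is the weather like?")
    future_2 = pool.submit(process_user_query, "Tell me a joke")
    print(future_1.result())
    print(future_2.result())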

Legacy Session API (Backwards Compatibility)

For backwards compatibility, the legacy session API is still available:

# Legacy approach - still works but not recommended for new code
session_1 = agentops.start_session(tags=["legacy-session-1"])
session_2 = agentops.start_session(tags=["legacy-session-2"])

# Legacy sessions work the same way as before
session_1.end_session(end_state="Success")
session_2.end_session(end_state="Success")

Ending Traces

End traces individually or all at once:

# End specific traces
agentops.end_trace(trace_1, "Success")
agentops.end_trace(trace_2, "Success")

# Or end all active traces at once
# agentops.end_trace(end_state="Success")
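
If your workload can raise, you may want to guarantee that a manually started trace is always closed. A minimal sketch using only the calls shown above (the "Error" end state is an assumption; check which end states your AgentOps version accepts):

trace = agentops.start_trace("guarded_query")
try:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello with error handling"}],
    )
    agentops.end_trace(trace, "Success")
except Exception:
    # "Error" is assumed here; substitute the failure end state your version uses
    agentops.end_trace(trace, "Error")
    raise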

Key Differences from Legacy Multi-Session Mode

  1. No mode switching: You can create multiple traces without entering a special “multi-session mode”
  2. Automatic LLM tracking: LLM calls are automatically associated with the current execution context
  3. No exceptions: No MultiSessionException or similar restrictions
  4. Cleaner API: Use decorators and context managers for better code organization
  5. Backwards compatibility: Legacy session functions still work for existing code

If you look in the AgentOps dashboard, you will see multiple unique traces, each with its respective LLM calls and events properly tracked.