Give us a star on GitHub to bookmark and save for later 🖇️
Core Concepts
- Agents: LLMs configured with instructions, tools, guardrails, and handoffs
- Handoffs: Allow agents to transfer control to other agents for specific tasks
- Guardrails: Configurable safety checks for input and output validation
- Tracing: Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows
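The concepts above can be pictured with a minimal plain-Python sketch. The class and field names here are illustrative only, not the SDK's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical shape of an agent, showing how instructions, tools,
# guardrails, and handoffs fit together (not the SDK's real class).
@dataclass
class Agent:
    name: str
    instructions: str                                          # system prompt
    tools: list[Callable] = field(default_factory=list)        # callable tools
    guardrails: list[Callable] = field(default_factory=list)   # input/output checks
    handoffs: list["Agent"] = field(default_factory=list)      # agents it may delegate to

billing = Agent(name="Billing", instructions="Handle billing questions.")
triage = Agent(
    name="Triage",
    instructions="Route the user to the right specialist.",
    handoffs=[billing],
)
```

A handoff is just a reference to another agent that the current agent is allowed to transfer control to.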
1. Install the AgentOps SDK
2. Install the OpenAI Agents SDK
This will be updated to a PyPI link when the package is officially released.
3. Add two lines of code
Make sure to call agentops.init before calling any OpenAI, Cohere, CrewAI, etc. models. Set your API key as an .env variable for easy access.
4. Run your agents
Execute your program and visit app.agentops.ai/drilldown to observe your Agents! 🕵️
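The "two lines" from step 3 look like this. The agentops calls are left commented out here so the snippet stands alone without the package installed; agentops.init() reads your key from the AGENTOPS_API_KEY environment variable:

```python
import os

# Assumes you have an AgentOps API key; normally you'd keep it in a .env file.
os.environ.setdefault("AGENTOPS_API_KEY", "<your-agentops-api-key>")

# The two lines — run these before creating any OpenAI/Cohere/CrewAI clients:
# import agentops
# agentops.init()  # picks up AGENTOPS_API_KEY from the environment
```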
After your run, AgentOps prints a clickable URL to the console that links directly to your session in the Dashboard.

Hello World Example
Handoffs Example
Functions Example
The Agent Loop
When you call Runner.run(), the SDK runs a loop until it gets a final output:
- The LLM is called using the model and settings on the agent, along with the message history.
- The LLM returns a response, which may include tool calls.
- If the response has a final output, the loop ends and returns it.
- If the response has a handoff, the agent is set to the new agent and the loop continues from step 1.
- Tool calls are processed (if any) and tool response messages are appended. Then the loop continues from step 1.
Use the max_turns parameter to limit the number of loop iterations.
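The loop above can be sketched in plain Python. All names here (Response, ScriptedAgent, run) are illustrative stand-ins, not the SDK's API; each "LLM call" just pops the next canned response off the agent's script:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Response:
    output: Optional[str] = None       # final output, if any
    tool_calls: list = field(default_factory=list)
    handoff: object = None             # agent to hand off to, if any

@dataclass
class ScriptedAgent:
    name: str
    script: list                       # canned Responses, returned in order

    def call_llm(self, messages):
        return self.script.pop(0)

def run(agent, messages, max_turns=10):
    for _ in range(max_turns):
        response = agent.call_llm(messages)   # steps 1-2: call the model
        if response.handoff is not None:      # handoff: switch agents, loop again
            agent = response.handoff
            continue
        if response.output is not None:       # final output: loop ends
            return response.output
        for call in response.tool_calls:      # process tools, append results, loop
            messages.append({"role": "tool", "content": f"ran {call}"})
    raise RuntimeError("hit max_turns without a final output")

specialist = ScriptedAgent("specialist", [Response(output="42")])
triage = ScriptedAgent("triage", [Response(tool_calls=["lookup"]),
                                  Response(handoff=specialist)])
print(run(triage, []))  # → 42
```

Note how max_turns bounds the for loop: if no final output arrives in time, the run fails instead of spinning forever.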
Final Output
Final output is the last thing the agent produces in the loop:
- If you set an output_type on the agent, the final output is when the LLM returns something of that type using structured outputs.
- If there's no output_type (i.e., plain text responses), then the first LLM response without any tool calls or handoffs is considered the final output.
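The two rules above can be sketched as a predicate. The names (ModelResponse, is_final) are hypothetical, for illustration only:

```python
from dataclasses import dataclass, field

# Hypothetical response shape, just for illustration.
@dataclass
class ModelResponse:
    output: object = None
    tool_calls: list = field(default_factory=list)
    handoff: object = None

def is_final(response, output_type=None):
    if output_type is not None:
        # With an output_type set, only a structured output of that type is final.
        return isinstance(response.output, output_type)
    # Plain text: the first response with no tool calls or handoffs is final.
    return response.tool_calls == [] and response.handoff is None
```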

