OpenAI Agents SDK
AgentOps and OpenAI Agents SDK integration for powerful multi-agent workflow monitoring.
OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows. The SDK provides a comprehensive set of tools for creating, managing, and monitoring agent-based applications.
Core Concepts
- Agents: LLMs configured with instructions, tools, guardrails, and handoffs
- Handoffs: Allow agents to transfer control to other agents for specific tasks
- Guardrails: Configurable safety checks for input and output validation
- Tracing: Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows
Install the AgentOps SDK
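AgentOps is available on PyPI:

```bash
pip install agentops
```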
Install OpenAI Agents SDK
This will be updated to a PyPI link when the package is officially released.
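In the meantime, a sketch of installing from the GitHub repository (the package was later published to PyPI as `openai-agents`, so both options are shown):

```bash
pip install git+https://github.com/openai/openai-agents-python.git
# or, once published to PyPI:
pip install openai-agents
```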
Add 2 lines of code
Make sure to call `agentops.init` before calling any `openai`, `cohere`, `crew`, etc. models.
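In practice those two lines are just an import and an init call; `<YOUR_API_KEY>` below is a placeholder for your AgentOps API key:

```python
import agentops

# Initialize AgentOps before any LLM clients are created so calls are traced
agentops.init(api_key="<YOUR_API_KEY>")
```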
Set your API key as an environment variable in a `.env` file for easy access.
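A sketch of the `.env` entry, assuming `AGENTOPS_API_KEY` is the variable the SDK reads:

```
AGENTOPS_API_KEY=<YOUR_API_KEY>
```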
Read more about environment variables in Advanced Configuration
Run your agents
Execute your program and visit app.agentops.ai/drilldown to observe your Agents! 🕵️
After your run, AgentOps prints a clickable URL to the console that links directly to your session in the dashboard.
Hello World Example
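A minimal sketch: initialize AgentOps, define an agent, and run it synchronously (the haiku prompt is illustrative):

```python
import agentops
from agents import Agent, Runner

agentops.init()  # reads AGENTOPS_API_KEY from the environment

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
)

# Runner.run_sync drives the agent loop to completion and blocks until done
result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)
```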
Handoffs Example
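A sketch of a triage agent that hands control off to language-specific agents (the agent names and instructions are illustrative):

```python
import asyncio

import agentops
from agents import Agent, Runner

agentops.init()

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)
english_agent = Agent(
    name="English agent",
    instructions="You only speak English.",
)

# The triage agent can hand off to either specialist based on the input
triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)

async def main():
    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
    print(result.final_output)

asyncio.run(main())
```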
Functions Example
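A sketch using `function_tool` to expose a plain Python function to the agent as a tool (the `get_weather` stub is illustrative):

```python
import asyncio

import agentops
from agents import Agent, Runner, function_tool

agentops.init()

@function_tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    return f"The weather in {city} is sunny."

agent = Agent(
    name="Weather agent",
    instructions="You are a helpful agent.",
    tools=[get_weather],
)

async def main():
    result = await Runner.run(agent, input="What's the weather in Tokyo?")
    print(result.final_output)

asyncio.run(main())
```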
The Agent Loop
When you call `Runner.run()`, the SDK runs a loop until it gets a final output:
1. The LLM is called using the model and settings on the agent, along with the message history.
2. The LLM returns a response, which may include tool calls.
3. If the response has a final output, the loop ends and returns it.
4. If the response has a handoff, the agent is set to the new agent and the loop continues from step 1.
5. Tool calls are processed (if any) and tool response messages are appended. Then the loop continues from step 1.
You can use the `max_turns` parameter to limit the number of loop executions.
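For example, a sketch that caps the loop at 5 turns, assuming the SDK's `MaxTurnsExceeded` exception is raised when the cap is hit:

```python
from agents import Agent, Runner
from agents.exceptions import MaxTurnsExceeded

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

try:
    # Stop the run if the loop executes more than 5 turns
    result = Runner.run_sync(agent, "Plan a weekend trip.", max_turns=5)
    print(result.final_output)
except MaxTurnsExceeded:
    print("The agent did not produce a final output within 5 turns.")
```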
Final Output
Final output is the last thing the agent produces in the loop:
- If you set an `output_type` on the agent, the final output is produced when the LLM returns something of that type, using structured outputs.
- If there's no `output_type` (i.e., plain text responses), then the first LLM response without any tool calls or handoffs is considered the final output.
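A sketch of setting an `output_type` with a Pydantic model (the `CalendarEvent` schema is illustrative):

```python
from pydantic import BaseModel

from agents import Agent, Runner

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# The loop ends when the LLM emits a CalendarEvent via structured outputs
agent = Agent(
    name="Calendar extractor",
    instructions="Extract calendar events from the text.",
    output_type=CalendarEvent,
)

result = Runner.run_sync(agent, "Alice and Bob are meeting for lunch on Friday.")
print(result.final_output)  # a CalendarEvent instance
```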