AutoGen
Track and analyze your AutoGen agents with AgentOps
Installation
pip install agentops pyautogen
Usage
Initialize AgentOps at the beginning of your application to automatically track all AutoGen agent interactions:
import agentops
import autogen

# Initialize AgentOps
agentops.init("<INSERT YOUR API KEY HERE>")

# Configure your AutoGen agents
config_list = [
    {
        "model": "gpt-4",
        "api_key": "<YOUR_OPENAI_API_KEY>",
    }
]

llm_config = {
    "config_list": config_list,
    "timeout": 60,
}

# Create AutoGen agents
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
    system_message="You are a helpful AI assistant.",
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"last_n_messages": 3, "work_dir": "coding"},
)

# Initiate a conversation
user_proxy.initiate_chat(
    assistant,
    message="How can I implement a basic web scraper in Python?"
)
# All agent interactions are automatically tracked by AgentOps
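Depending on your installed SDK version, you may also want to close the session explicitly once the conversation finishes so it appears as completed in the AgentOps dashboard. The sketch below assumes an agentops.end_session helper is available in your version; check your SDK before relying on it.

# Minimal sketch: mark the session as finished after the chat completes
# (assumes agentops.end_session exists in your installed AgentOps version)
agentops.end_session("Success")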
Multi-Agent Conversation Example
AgentOps tracks interactions across multiple AutoGen agents:
import agentops
import autogen

# Initialize AgentOps
agentops.init("<INSERT YOUR API KEY HERE>")

# Configure LLM
config_list = [
    {
        "model": "gpt-4",
        "api_key": "<YOUR_OPENAI_API_KEY>",
    }
]

llm_config = {
    "config_list": config_list,
    "timeout": 60,
}

# Create a team of agents
researcher = autogen.AssistantAgent(
    name="researcher",
    llm_config=llm_config,
    system_message="You are a researcher who specializes in finding accurate information."
)

coder = autogen.AssistantAgent(
    name="coder",
    llm_config=llm_config,
    system_message="You are an expert programmer who writes clean, efficient code."
)

critic = autogen.AssistantAgent(
    name="critic",
    llm_config=llm_config,
    system_message="You review solutions and provide constructive feedback."
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"last_n_messages": 3, "work_dir": "coding"},
)

# Create a group chat
groupchat = autogen.GroupChat(
    agents=[user_proxy, researcher, coder, critic],
    messages=[],
    max_round=12
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config=llm_config
)

# Initiate the group chat
user_proxy.initiate_chat(
    manager,
    message="Create a Python program to analyze sentiment from Twitter data."
)
# All agent interactions across the group chat are automatically tracked by AgentOps
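If you run several group chats, tagging each session makes related runs easier to filter in the AgentOps dashboard. The sketch below assumes your installed SDK version accepts a tags argument to agentops.init; check its signature before relying on it.

# Minimal sketch: tag the session at init time so group-chat runs can be
# filtered together in the dashboard (assumes init accepts a tags argument
# in your installed AgentOps version)
agentops.init("<INSERT YOUR API KEY HERE>", tags=["autogen-groupchat"])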
Environment Variables
Set your API keys as environment variables in a .env file for easy access.
AGENTOPS_API_KEY=<YOUR API KEY>
OPENAI_API_KEY=<YOUR OPENAI API KEY>
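With the keys in a .env file, you can avoid hard-coding them in your script. The sketch below is a minimal example assuming python-dotenv is installed (pip install python-dotenv) and that agentops.init() reads AGENTOPS_API_KEY from the environment when no key is passed.

import os

from dotenv import load_dotenv
import agentops

# Load the .env file into the process environment
load_dotenv()

# Assumes AGENTOPS_API_KEY is picked up from the environment when no key is passed
agentops.init()

# AutoGen config can read OPENAI_API_KEY from the environment as well
config_list = [
    {
        "model": "gpt-4",
        "api_key": os.environ["OPENAI_API_KEY"],
    }
]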
Read more about environment variables in Advanced Configuration