LangChain is a framework for developing applications powered by language models. AgentOps automatically tracks your LangChain agents through its LangChain callback handler.

Installation

Install AgentOps and the necessary LangChain dependencies:

pip install agentops langchain langchain-community langchain-openai duckduckgo-search python-dotenv

Setting Up API Keys

You’ll need API keys for AgentOps and OpenAI (this example uses ChatOpenAI).

Set these as environment variables or in a .env file:

export OPENAI_API_KEY="your_openai_api_key_here"
export AGENTOPS_API_KEY="your_agentops_api_key_here"
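
Alternatively, a .env file in your project root would contain the same keys:

# .env
OPENAI_API_KEY="your_openai_api_key_here"
AGENTOPS_API_KEY="your_agentops_api_key_here"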

Then load them in your Python code:

from dotenv import load_dotenv
import os

load_dotenv()

AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
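
As an optional sanity check (not part of the original example), you can fail fast if either key is missing:

# Optional: fail early with a clear message if a key was not loaded
if not AGENTOPS_API_KEY or not OPENAI_API_KEY:
    raise EnvironmentError(
        "Missing OPENAI_API_KEY or AGENTOPS_API_KEY - check your environment or .env file."
    )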

Usage

Integrating AgentOps with LangChain involves using the LangchainCallbackHandler.

You don’t need a separate agentops.init() call; the LangchainCallbackHandler initializes the AgentOps client automatically if an API key is provided to it or found in the environment.
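
For example, if AGENTOPS_API_KEY is already set in your environment, the handler can be created without passing the key explicitly (a minimal sketch):

from agentops.integration.callbacks.langchain import LangchainCallbackHandler

# The handler finds AGENTOPS_API_KEY in the environment and
# initializes the AgentOps client automatically.
handler = LangchainCallbackHandler(tags=["LangChain Example"])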

Here’s a basic example:

from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType, Tool
from langchain_community.tools import DuckDuckGoSearchRun  # Example search tool
from agentops.integration.callbacks.langchain import LangchainCallbackHandler


# 1. Initialize LangchainCallbackHandler
# AGENTOPS_API_KEY can be passed here or loaded from environment
handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['LangChain Example'])

# 2. Define tools for the agent
search_tool = DuckDuckGoSearchRun()
tools = [
    Tool( # Wrap DuckDuckGoSearchRun in a Tool object
        name="DuckDuckGo Search",
        func=search_tool.run,
        description="Useful for when you need to answer questions about current events or the current state of the world."
    )
]

# 3. Configure LLM with the AgentOps handler
# OPENAI_API_KEY can be passed here or loaded from environment
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo',
                 temperature=0)  # temperature=0 keeps responses deterministic

# 4. Initialize your agent, passing the handler to callbacks
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    callbacks=[handler], 
    handle_parsing_errors=True
)

# 5. Run your agent
try:
    response = agent.run("Who is the current CEO of OpenAI and what is his most recent public statement?")
    print(response)
except Exception as e:
    print(f"An error occurred: {e}")

Visit the AgentOps Dashboard to see your session.

Examples