LangGraph Integration with AgentOps
This example demonstrates how to use LangGraph with AgentOps for comprehensive observability of your graph-based agent workflows.
LangGraph is a framework for building stateful, multi-step applications with LLMs. AgentOps automatically instruments LangGraph to track:
- Graph compilation and structure
- Node executions and transitions
- Tool usage within the graph
- LLM calls made by agents
- Complete execution flow with timing
Installation
pip install agentops langchain-openai langgraph python-dotenv
Setup
First, let’s import the necessary libraries and initialize AgentOps:
import os
from typing import Annotated, Literal, TypedDict
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage
from langchain_core.tools import tool
import agentops
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
# Initialize AgentOps - this enables automatic instrumentation
agentops.init(os.getenv("AGENTOPS_API_KEY"), auto_start_session=False)
trace = agentops.start_trace("langgraph_example")
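The load_dotenv() call above expects a .env file alongside the notebook. It needs your AgentOps key plus the OpenAI key that ChatOpenAI reads; the values below are placeholders:

# .env
AGENTOPS_API_KEY=your_agentops_api_key
OPENAI_API_KEY=your_openai_api_key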
Create Tools
Let's create some simple tools that our agent can use:
@tool
def get_weather(location: str) -> str:
    """Get the weather for a given location."""
    # Simulated weather data
    weather_data = {
        "New York": "Sunny, 72°F",
        "London": "Cloudy, 60°F",
        "Tokyo": "Rainy, 65°F",
        "Paris": "Partly cloudy, 68°F",
        "Sydney": "Clear, 75°F"
    }
    return weather_data.get(location, f"Weather data not available for {location}")
@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        # NOTE: eval is fine for this demo but should never be used on untrusted input
        result = eval(expression)
        return f"The result is: {result}"
    except Exception as e:
        return f"Error calculating expression: {str(e)}"

# Collect tools for binding to the model
tools = [get_weather, calculate]
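Since @tool-decorated functions are standalone runnables, you can sanity-check them directly before wiring them into the graph; given the simulated data above, these calls return the commented values:

get_weather.invoke({"location": "London"})  # "Cloudy, 60°F"
calculate.invoke({"expression": "2 + 2"})   # "The result is: 4"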
Define Agent State
In LangGraph, we need to define the state that will be passed between nodes:
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
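The add_messages annotation is a reducer: when a node returns {"messages": [...]}, LangGraph appends those messages to the existing list rather than replacing it. You can call the reducer directly to see the behavior:

merged = add_messages(
    [HumanMessage(content="Hi")],
    [AIMessage(content="Hello!")],
)
print(len(merged))  # 2 -- the new message is appended, not substituted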
Create the Model and Node Functions
We’ll create a model with tool binding and define the functions that will be our graph nodes:
# Create model with tool binding
model = ChatOpenAI(temperature=0, model="gpt-4o-mini").bind_tools(tools)
def should_continue(state: AgentState) -> Literal["tools", "end"]:
    """Determine if we should continue to tools or end."""
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM wants to use tools, continue to the tools node
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    # Otherwise, we're done
    return "end"

def call_model(state: AgentState):
    """Call the language model."""
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

def call_tools(state: AgentState):
    """Execute the tool calls requested by the model."""
    messages = state["messages"]
    last_message = messages[-1]
    tool_messages = []
    for tool_call in last_message.tool_calls:
        tool_name = tool_call["name"]
        tool_args = tool_call["args"]
        # Find and execute the requested tool
        for tool in tools:
            if tool.name == tool_name:
                result = tool.invoke(tool_args)
                tool_messages.append(
                    ToolMessage(
                        content=str(result),
                        tool_call_id=tool_call["id"]
                    )
                )
                break
    return {"messages": tool_messages}
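For reference, each entry in last_message.tool_calls is a plain dict in LangChain's standard tool-call shape, which is exactly what call_tools unpacks above (the values here are made up for illustration):

{
    "name": "get_weather",
    "args": {"location": "Paris"},
    "id": "call_abc123",  # illustrative; real IDs come from the provider
    "type": "tool_call",
}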
Build the Graph
Now let’s construct the LangGraph workflow:
# Create the graph
workflow = StateGraph(AgentState)
# Add nodes
workflow.add_node("agent", call_model)
workflow.add_node("tools", call_tools)
# Set the entry point
workflow.set_entry_point("agent")
# Add conditional edges
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "tools": "tools",
        "end": END
    }
)
# Add edge from tools back to agent
workflow.add_edge("tools", "agent")
# Compile the graph
app = workflow.compile()
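Optionally, you can confirm the wiring before running anything: compiled LangGraph apps can render their own structure as a Mermaid diagram (assumes a reasonably recent LangGraph version):

print(app.get_graph().draw_mermaid())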
Run Examples
Let’s test our agent with different queries that require tool usage:
# Example 1: Weather query
print("Example 1: Weather Query")
print("=" * 50)
messages = [HumanMessage(content="What's the weather in New York and Tokyo?")]
result = app.invoke({"messages": messages})
final_message = result["messages"][-1]
print(f"Response: {final_message.content}\n")
# Example 2: Math calculation
print("Example 2: Math Calculation")
print("=" * 50)
messages = [HumanMessage(content="Calculate 25 * 4 + 10")]
result = app.invoke({"messages": messages})
final_message = result["messages"][-1]
print(f"Response: {final_message.content}\n")
# Example 3: Combined query
print("Example 3: Combined Query")
print("=" * 50)
messages = [HumanMessage(content="What's the weather in Paris? Also calculate 100/5")]
result = app.invoke({"messages": messages})
final_message = result["messages"][-1]
print(f"Response: {final_message.content}\n")
View in AgentOps Dashboard
After running this notebook, you can view the traces in your AgentOps dashboard. You’ll see:
- Graph Compilation: The structure of your LangGraph with nodes and edges
- Execution Flow: How the graph executed, including:
  - Agent node calls
  - Tool node executions
  - State transitions
- LLM Calls: Each OpenAI call with its prompts and completions
- Tool Usage: Which tools were called and their results
- Timing Information: How long each step took
The instrumentation captures the full context of your LangGraph application automatically!
print("✅ Check your AgentOps dashboard for comprehensive traces!")
print("🔍 You'll see the graph structure, execution flow, and all LLM/tool calls.")
agentops.end_trace(trace)