LangChain Example
Using the LangChain Callback Handler
View Notebook on GitHub
AgentOps Langchain Agent Implementation
Using AgentOps monitoring with Langchain is simple. We’ve created a LangchainCallbackHandler that will do all of the heavy lifting!
First, let's install the required packages.
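A minimal install sketch; exact pins are up to you, and `langchain-openai` is assumed here for the OpenAI chat model used later:

```python
%pip install langchain langchain-openai
%pip install agentops
%pip install python-dotenv
```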
Then import them
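A sketch of the imports used in the rest of this walkthrough, assuming a recent LangChain release where the OpenAI integration lives in `langchain-openai` (module paths differ across versions):

```python
import os
from dotenv import load_dotenv

from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
```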
The only difference when using AgentOps is that we'll also import this special callback handler.
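The handler's import path has moved between AgentOps releases; the form below is one commonly seen and may need adjusting for your installed version:

```python
# Import the AgentOps callback handler for LangChain.
# Note: the module path has changed across AgentOps releases; adjust if needed.
from agentops.partners.langchain_callback_handler import (
    LangchainCallbackHandler as AgentOpsLangchainCallbackHandler,
)
```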
Next, we'll set our API keys. There are several ways to do this; the approach below is simply the most foolproof for the purposes of this notebook. It accounts both for users who rely on environment variables and for those who just want to set the API key here in this notebook (see the sketch after the list below).
- Create an environment variable in a `.env` file or other method. By default, the AgentOps `init()` function will look for an environment variable named `AGENTOPS_API_KEY`. Or…
- Replace `<your_agentops_key>` below and pass in the optional `api_key` parameter to the AgentOps `init(api_key=...)` function. Remember not to commit your API key to a public repo!
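A sketch covering both options, assuming `python-dotenv` is used to read a local `.env` file and falling back to the placeholder strings when the environment variables are not set:

```python
# Load variables from a local .env file, if present.
load_dotenv()

AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY", "<your_agentops_key>")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "<your_openai_key>")
```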
This is where AgentOps comes into play. Before creating our LLM instance via Langchain, we'll first create an instance of the AgentOps `LangchainCallbackHandler`. Once the handler is initialized, a session will be recorded automatically.
Pass in your API key, and optionally any tags to describe this session for easier lookup in the AgentOps dashboard.
You can also retrieve the `session_id` of the newly created session.
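A sketch of the handler and LLM setup; the tag parameter name and the attribute exposing the session id have varied across AgentOps releases, so treat those as assumptions to check against your installed version:

```python
# Initialize the AgentOps handler; this starts recording a session.
agentops_handler = AgentOpsLangchainCallbackHandler(
    api_key=AGENTOPS_API_KEY,
    default_tags=["Langchain Example"],  # may be `tags` in older releases
)

# Create the LLM with the handler attached so LLM calls are recorded.
llm = ChatOpenAI(
    openai_api_key=OPENAI_API_KEY,
    callbacks=[agentops_handler],
    model="gpt-3.5-turbo",
)

# The exact attribute name varies by SDK version; `current_session_ids` is one form.
print(agentops_handler.current_session_ids)
```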
Agents generally use tools. Let’s define a simple tool here. Tool usage is also recorded.
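A toy tool as a sketch; `find_movie` and its return values are made up purely for illustration:

```python
@tool
def find_movie(genre: str) -> str:
    """Find available movies for a given genre."""
    # Hypothetical lookup so the agent has something to call.
    if genre == "drama":
        return "Dune 2"
    return "Pineapple Express"

tools = [find_movie]
```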
For each tool, you also need to add the callback handler.
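One way to do this is to set the `callbacks` field that LangChain tools expose:

```python
# Attach the AgentOps handler to every tool so tool calls are recorded too.
for t in tools:
    t.callbacks = [agentops_handler]
```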
Add the tools to our LLM
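For example, using `bind_tools` on the chat model:

```python
# Bind the tool schemas to the LLM so it can emit tool calls.
llm_with_tools = llm.bind_tools([find_movie])
```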
Finally, let's create our agent! Pass the callback handler to the agent, and all of its actions will be recorded in the AgentOps dashboard.
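A sketch using LangChain's OpenAI tools agent; the prompt wording and the sample question are illustrative only:

```python
# A simple prompt; the `agent_scratchpad` placeholder holds intermediate tool calls.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

# Pass the AgentOps handler in the run config so agent actions are recorded.
agent_executor.invoke(
    {"input": "What comedies are playing?"},
    config={"callbacks": [agentops_handler]},
)
```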
Check your session
Finally, check your run on AgentOps
Now, if you look in the AgentOps dashboard, you will see the session recorded with the LLM calls and tool usage.
Langchain v0 Example
This Langchain version is out of date and support for it is being deprecated. You can find the example notebook here.