By default, AgentOps is compatible with agents built on Langchain through our LLM instrumentation, as long as they use supported models.

As an alternative to instrumentation, AgentOps provides a Langchain Callback Handler.

Constructor

  • api_key (Optional, string): API Key for AgentOps services. If not provided, the key will be read from the AGENTOPS_API_KEY environment variable.
  • endpoint (Optional, string): The endpoint for the AgentOps service. Defaults to 'https://api.agentops.ai'.
  • max_wait_time (Optional, int): The maximum time to wait in milliseconds before flushing the queue. Defaults to 30,000 (30 seconds).
  • max_queue_size (Optional, int): The maximum size of the event queue. Defaults to 100.
  • tags (Optional, List[string]): Tags for the sessions, used for grouping or sorting (e.g., ["GPT-4"]).

Usage

Install Dependencies

pip install agentops[langchain]

Disable Instrumentation

The Handler and our automatic instrumentation accomplish the same task, so they should not run together. To use the Handler, first disable instrumentation.

When calling .init(), pass in the proper parameter:

agentops.init(instrument_llm_calls=False)

If you are building an agent framework or other SDK that adds AgentOps support, you cannot guarantee that your end users will pass instrument_llm_calls=False. In that case, call agentops.stop_instrumenting().

Implement Callback Handler

Initialize the handler with its constructor and pass it into Langchain's callbacks array.

from agentops.langchain_callback_handler import LangchainCallbackHandler
ChatOpenAI(callbacks=[LangchainCallbackHandler()])

Example:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from agentops import LangchainCallbackHandler

prompt = ChatPromptTemplate.from_messages(["Tell me a joke about {animal}"])

model = ChatOpenAI(callbacks=[LangchainCallbackHandler()])

chain = prompt | model
response = chain.invoke({"animal": "bears"})

Why use the handler?

If your project uses Langchain for Agents, Events, and Tools, it may be easier to use the Callback Handler for observability.

If your project uses models with Langchain that AgentOps does not yet support directly, the Handler may still support them.
