
AgentOps integrates with Llama Stack via its Python client to provide observability into applications that leverage it. Llama Stack has comprehensive documentation as well as a great quickstart guide. You can use that guide to set up the Llama Stack server and client, or alternatively use our Docker Compose file.

Adding AgentOps to Llama Stack applications

1. Install the AgentOps SDK

pip install agentops

2. Install the Llama Stack Client

pip install llama-stack-client

3. Add 3 lines of code

Make sure to call agentops.init before initializing any model clients (OpenAI, Cohere, CrewAI, etc.).

import agentops
agentops.init(<INSERT YOUR API KEY HERE>)

Alternatively, set your API key as an environment variable for easy access:

AGENTOPS_API_KEY=<YOUR API KEY>

Read more about environment variables in Advanced Configuration.
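
As a minimal sketch of the environment-variable approach (the key value below is a placeholder, not a real key; in practice it comes from your .env file or shell):

```python
import os

# Placeholder value for illustration only; normally AGENTOPS_API_KEY
# is already set in your shell or loaded from a .env file.
os.environ.setdefault("AGENTOPS_API_KEY", "<YOUR API KEY>")

api_key = os.getenv("AGENTOPS_API_KEY")

# With the variable set, you can pass the key to agentops.init:
# import agentops
# agentops.init(api_key)
```

Keeping the key in the environment rather than in source code avoids committing secrets to version control.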

4. Run your 🦙🥞 application

Execute your program and visit app.agentops.ai/drilldown to observe your waterfall! 🕵️
After your run, AgentOps prints a clickable URL to the console that links directly to your session in the Dashboard.

Examples

An example notebook is available that showcases how to use the Llama Stack client with AgentOps.
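
In lieu of the notebook inline, a rough sketch of putting the pieces together might look like the following. The server URL and model ID are assumptions (they depend on your Llama Stack setup), and the chat_completion call signature reflects recent llama-stack-client releases, so check the client's own reference if it errors:

```python
"""Sketch: tracing a Llama Stack chat completion with AgentOps.
The base_url and model_id below are assumed example values."""
import os

try:
    import agentops
    from llama_stack_client import LlamaStackClient
except ImportError:  # SDKs not installed yet; see steps 1-2 above
    agentops = None
    LlamaStackClient = None


def run_chat(prompt: str):
    """Send one user message through Llama Stack, with AgentOps tracing it."""
    if agentops is None or LlamaStackClient is None:
        raise RuntimeError("Install agentops and llama-stack-client first.")

    # Initialize AgentOps BEFORE any model calls so they are captured.
    agentops.init(os.getenv("AGENTOPS_API_KEY"))

    # Assumed local Llama Stack server; adjust to match your deployment.
    client = LlamaStackClient(base_url="http://localhost:8321")

    response = client.inference.chat_completion(
        model_id="meta-llama/Llama-3.2-3B-Instruct",  # assumed model id
        messages=[{"role": "user", "content": prompt}],
    )
    return response
```

Running this with both SDKs installed and a Llama Stack server up should produce a session link in the console, as described in step 4.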