AgentOps integrates with Llama Stack via its Python client to provide observability into applications that leverage it.

Llama Stack has comprehensive documentation available, as well as a great quickstart guide. You can use that guide to set up the Llama Stack server and client, or alternatively use our Docker Compose file.

Adding AgentOps to Llama Stack applications

1

Install the AgentOps SDK

pip install agentops
2

Install the Llama Stack Client

pip install llama-stack-client
3

Add 3 lines of code

Make sure to call agentops.init before initializing any model clients (OpenAI, Cohere, CrewAI, etc.).

import agentops
agentops.init(<INSERT YOUR API KEY HERE>)

Set your API key as an environment variable in your .env file for easy access.

AGENTOPS_API_KEY=<YOUR API KEY>

Read more about environment variables in Advanced Configuration
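When the environment variable is set, the key can also be read explicitly before initialization. A minimal, standard-library-only sketch (the placeholder fallback is purely illustrative):

```python
import os

# Stand-in value for illustration; in practice the key comes from your .env file.
os.environ.setdefault("AGENTOPS_API_KEY", "<YOUR API KEY>")

# AgentOps can pick up AGENTOPS_API_KEY on its own, but reading it explicitly
# lets your application fail fast when the variable is missing or empty.
api_key = os.environ["AGENTOPS_API_KEY"]
assert api_key, "AGENTOPS_API_KEY is not set"
```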

4

Run your šŸ¦™šŸ„ž application

Execute your program and visit app.agentops.ai/drilldown to observe your waterfall! šŸ•µļø

After your run, AgentOps prints a clickable URL in the console that links directly to your session in the Dashboard.

Examples

An example notebook is available here, showcasing how to use the Llama Stack client with AgentOps.