AgentOps integrates with Llama Stack via its Python client to provide observability into applications that leverage it.

Llama Stack has comprehensive documentation available, as well as a great quickstart guide. You can use that guide to set up the Llama Stack server and client, or alternatively use our Docker Compose file.

Adding AgentOps to Llama Stack applications

1

Install the AgentOps SDK

2

Install the Llama Stack Client

3

Add 3 lines of code

Make sure to call agentops.init before making calls to any models (OpenAI, Cohere, CrewAI, etc.).

Set your API key as an environment variable (for example, in a .env file) for easy access.
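For example, a .env file might contain (placeholder value shown):

```
AGENTOPS_API_KEY=<your-agentops-api-key>
```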

Read more about environment variables in Advanced Configuration

4

Run your 🦙🥞 application

Execute your program and visit app.agentops.ai/drilldown to observe your waterfall! 🕵️

After your run, AgentOps prints a clickable URL to the console, linking directly to your session in the Dashboard.

Examples

An example notebook is available here that showcases how to use the Llama Stack client with AgentOps.