Monitor and analyze your Google Gemini API calls with AgentOps
AgentOps provides seamless integration with Google’s Generative AI API, allowing you to monitor and analyze all your Gemini model interactions automatically.
Initialize AgentOps at the beginning of your application to automatically track all Gemini API calls:
```python
import agentops
from google import genai

# Initialize AgentOps
agentops.init(<INSERT YOUR API KEY HERE>)

# Create a client for the Gemini Developer API
client = genai.Client(api_key="YOUR_GEMINI_API_KEY")

# Generate content with a model
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='What is AI observability?'
)
print(response.text)

# All Gemini API calls are automatically tracked by AgentOps
```
You can also set up the client using environment variables or with Vertex AI:
```python
import agentops
from google import genai

# Initialize AgentOps
agentops.init(<INSERT YOUR API KEY HERE>)

# Set the GOOGLE_API_KEY environment variable before running:
# export GOOGLE_API_KEY='your-api-key'
client = genai.Client()

# Generate content
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='What is AI observability?'
)
print(response.text)
```
You can customize the model behavior with system instructions and other settings:
```python
import agentops
from google import genai
from google.genai import types

# Initialize AgentOps
agentops.init(<INSERT YOUR API KEY HERE>)

# Create a client
client = genai.Client(api_key="YOUR_GEMINI_API_KEY")

# Generate content with system instructions
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Write a short poem',
    config=types.GenerateContentConfig(
        system_instruction='You are a professional poet who specializes in sonnets',
        max_output_tokens=200,
        temperature=0.7,
    ),
)
print(response.text)
```
AgentOps also supports tracking streaming requests with Gemini:
```python
import agentops
from google import genai

# Initialize AgentOps
agentops.init(<INSERT YOUR API KEY HERE>)

# Create a client
client = genai.Client(api_key="YOUR_GEMINI_API_KEY")

# Generate streaming content (the client exposes a dedicated streaming method)
for chunk in client.models.generate_content_stream(
    model='gemini-2.0-flash-001',
    contents='Explain quantum computing in simple terms.'
):
    print(chunk.text, end="", flush=True)

# All streaming requests are automatically tracked by AgentOps
```
Multi-turn chat sessions are tracked as well:

```python
import agentops
from google import genai

# Initialize AgentOps
agentops.init(<INSERT YOUR API KEY HERE>)

# Create a client
client = genai.Client(api_key="YOUR_GEMINI_API_KEY")

# Start a chat session
chat = client.chats.create(model='gemini-2.0-flash-001')

# Send messages and get responses
response = chat.send_message('Hello, how can you help me with AI development?')
print(response.text)

# Continue the conversation
response = chat.send_message('What are the best practices for prompt engineering?')
print(response.text)

# All chat interactions are automatically tracked by AgentOps
```
Set your API keys as environment variables in a .env file for easy access.
```
AGENTOPS_API_KEY=<YOUR API KEY>
GOOGLE_API_KEY=<YOUR GEMINI API KEY>

# For Vertex AI
# GOOGLE_GENAI_USE_VERTEXAI=true
# GOOGLE_CLOUD_PROJECT=your-project-id
# GOOGLE_CLOUD_LOCATION=us-central1
```