The SDK provides seamless instrumentation for the Google Gemini Python client.

Setup

Enable instrumentation with a single function call. This will automatically track all subsequent calls to GenerativeModel.
import agentbasis
from agentbasis.llms.gemini import instrument

# Initialize AgentBasis first
agentbasis.init(api_key="your-api-key", agent_id="your-agent-id")

# Enable Gemini instrumentation
instrument()
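The AgentBasis internals are not shown here, but instrumentation of this kind is typically implemented by wrapping the client's generation method so every call is timed and recorded. The sketch below illustrates that pattern with a toy FakeModel class and a captured_calls list standing in for the SDK's trace sink; none of these names are part of the real SDK.

```python
import functools
import time

captured_calls = []  # stand-in for the SDK's internal trace buffer


def wrap_generate(fn):
    """Wrap a generation method to record prompt, output, and latency."""
    @functools.wraps(fn)
    def wrapper(self, prompt, *args, **kwargs):
        start = time.monotonic()
        response = fn(self, prompt, *args, **kwargs)
        captured_calls.append({
            "gen_ai.request.model": self.model_name,
            "gen_ai.prompt": prompt,
            "gen_ai.completion": response,
            "duration": time.monotonic() - start,
        })
        return response

    return wrapper


class FakeModel:
    """Toy stand-in for GenerativeModel, used only to show the pattern."""
    def __init__(self, model_name):
        self.model_name = model_name

    def generate_content(self, prompt):
        return f"echo: {prompt}"


# Patch the method in place, much as instrument() would for the real client.
FakeModel.generate_content = wrap_generate(FakeModel.generate_content)

model = FakeModel("gemini-pro")
result = model.generate_content("hello")
```

Because the wrapper is installed on the class, every model created afterwards is tracked without any per-call changes, which is why instrument() only needs to run once.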

Usage

Once instrumented, use the Gemini client as you normally would. The SDK automatically captures:
  • Model name
  • Input prompts
  • Generated content
  • Token usage
  • Latency

Synchronous

import google.generativeai as genai

genai.configure(api_key="your-gemini-api-key")

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain quantum computing in simple terms")

print(response.text)

Asynchronous

import google.generativeai as genai
import asyncio

genai.configure(api_key="your-gemini-api-key")

async def main():
    model = genai.GenerativeModel("gemini-pro")
    response = await model.generate_content_async("Explain quantum computing")
    print(response.text)

asyncio.run(main())

Streaming

Streaming responses are also supported and will be fully tracked once the stream completes.
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content("Tell me a story", stream=True)

for chunk in response:
    print(chunk.text, end="")
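Full tracking only after the stream completes follows from how streaming works: the complete response text is not known until the last chunk has been consumed. A minimal sketch of a tracking wrapper around a stream, using plain strings in place of real chunk objects (which expose the text via a .text attribute) and a recorded list standing in for the SDK's trace sink:

```python
import time

recorded = []  # stand-in for the SDK's trace sink


def track_stream(chunks):
    """Yield chunks unchanged, recording the full response only once the
    stream is exhausted -- the completion isn't known until then."""
    start = time.monotonic()
    parts = []
    for chunk in chunks:
        parts.append(chunk)
        yield chunk
    recorded.append({
        "gen_ai.completion": "".join(parts),
        "duration": time.monotonic() - start,
    })


# Simulated stream of text chunks.
stream = track_stream(iter(["Once ", "upon ", "a time"]))
text = "".join(stream)
```

If the consumer abandons the stream early, the recording branch never runs, which is consistent with tracking being described as complete only "once the stream completes".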

Captured Data

The integration automatically records:
Field                         Description
gen_ai.system                 Always "gemini"
gen_ai.request.model          Model name (e.g., gemini-pro)
gen_ai.prompt                 Input prompt text
gen_ai.completion             Generated response
gen_ai.usage.input_tokens     Prompt token count
gen_ai.usage.output_tokens    Completion token count
duration                      Request latency
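To show how these fields relate to a Gemini response object, the sketch below maps a response onto the attribute names in the table. The token counts come from the response's usage_metadata (prompt_token_count and candidates_token_count in the Gemini Python client); the to_attributes helper and the stub response are illustrations, not part of the SDK.

```python
from types import SimpleNamespace


def to_attributes(model_name, prompt, response):
    """Map a Gemini response onto the gen_ai.* attribute names.
    Usage fields mirror the client's response.usage_metadata."""
    return {
        "gen_ai.system": "gemini",
        "gen_ai.request.model": model_name,
        "gen_ai.prompt": prompt,
        "gen_ai.completion": response.text,
        "gen_ai.usage.input_tokens": response.usage_metadata.prompt_token_count,
        "gen_ai.usage.output_tokens": response.usage_metadata.candidates_token_count,
    }


# Stub standing in for a real GenerateContentResponse.
fake_response = SimpleNamespace(
    text="Quantum bits can be 0 and 1 at once.",
    usage_metadata=SimpleNamespace(
        prompt_token_count=9,
        candidates_token_count=11,
    ),
)

attrs = to_attributes("gemini-pro", "Explain quantum computing", fake_response)
```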