The SDK provides seamless instrumentation for the OpenAI Python client.
## Setup
Enable instrumentation with a single function call. This automatically tracks all subsequent calls to both OpenAI and AsyncOpenAI clients.
```python
import agentbasis
from agentbasis.llms.openai import instrument

# Initialize AgentBasis first
agentbasis.init(api_key="your-api-key", agent_id="your-agent-id")

# Enable OpenAI instrumentation (covers sync and async)
instrument()
```
A single `instrument()` call patches both the synchronous and asynchronous clients; there is no need to call it twice.
## Usage
Once instrumented, use the OpenAI client as you normally would. The SDK captures the request and response data for every call automatically.
### Synchronous
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello world"}],
)

print(response.choices[0].message.content)
```
### Asynchronous
```python
import asyncio

from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI()
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello world"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
## Streaming
Streaming responses are supported for both sync and async. The trace is recorded once the stream completes.
### Sync Streaming
```python
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```
### Async Streaming
```python
import asyncio

from openai import AsyncOpenAI

async def stream_response():
    client = AsyncOpenAI()
    stream = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Tell me a story"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")

asyncio.run(stream_response())
```
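Note that by default, OpenAI streamed responses do not include token counts. If usage data matters for streamed calls, OpenAI's `stream_options={"include_usage": True}` parameter makes the API append a final chunk whose `usage` field carries the counts. Whether the instrumentation picks that chunk up is version-dependent, so treat the sketch below as illustrative; `stream_with_usage` is a hypothetical helper name, not part of the SDK:

```python
def stream_with_usage(client, prompt):
    """Start a streamed chat completion that also reports token usage.

    With include_usage, the API appends one final chunk whose `usage`
    field carries the token counts; that chunk has an empty `choices`
    list, so guard before reading deltas.
    """
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
        stream_options={"include_usage": True},
    )
```

With a real `OpenAI()` client, iterate the returned stream as in the examples above, skipping chunks whose `choices` list is empty.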
## Captured Data
The integration automatically records:
| Field | Description |
|---|---|
| `gen_ai.system` | Always `openai` |
| `gen_ai.request.model` | Model name (e.g., `gpt-4`) |
| `gen_ai.prompt` | Input messages |
| `gen_ai.completion` | Response content |
| `gen_ai.usage.prompt_tokens` | Prompt token count |
| `gen_ai.usage.completion_tokens` | Completion token count |
| `duration` | Request latency |
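For reference, the model, completion, and token attributes in the table correspond directly to fields on the OpenAI response object. The helper below is purely illustrative (`usage_attributes` is not an SDK function); it only shows where those attribute values come from:

```python
def usage_attributes(response):
    """Map an OpenAI chat completion response onto the attribute
    names from the table (illustrative, not part of the SDK)."""
    return {
        "gen_ai.system": "openai",
        # response.model is the resolved model the API actually served
        "gen_ai.request.model": response.model,
        "gen_ai.completion": response.choices[0].message.content,
        "gen_ai.usage.prompt_tokens": response.usage.prompt_tokens,
        "gen_ai.usage.completion_tokens": response.usage.completion_tokens,
    }
```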