The SDK allows you to monitor Anthropic’s Claude models with zero code changes to your logic.
## Setup
Enable instrumentation with a single function call. This automatically tracks all subsequent calls to both Anthropic and AsyncAnthropic clients.
```python
import agentbasis
from agentbasis.llms.anthropic import instrument

# Initialize AgentBasis first
agentbasis.init(api_key="your-api-key", agent_id="your-agent-id")

# Enable Anthropic instrumentation (covers sync and async)
instrument()
```
A single `instrument()` call patches both the synchronous and asynchronous clients; there is no need to call it twice.
## Usage
Once instrumented, use the Anthropic client as you normally would. All `messages.create` calls are traced automatically.
### Synchronous
```python
from anthropic import Anthropic

client = Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello there"}],
)

print(response.content[0].text)
```
### Asynchronous
```python
import asyncio

from anthropic import AsyncAnthropic

async def main():
    client = AsyncAnthropic()
    response = await client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello there"}],
    )
    print(response.content[0].text)

asyncio.run(main())
```
## Streaming
Streaming responses are supported for both sync and async clients. The trace is recorded once the stream completes.
### Sync Streaming
```python
# Reuses the `client` created in the synchronous example above
with client.messages.stream(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Tell me a story"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="")
```
### Async Streaming
```python
async def stream_response():
    client = AsyncAnthropic()
    async with client.messages.stream(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Tell me a story"}],
    ) as stream:
        async for text in stream.text_stream:
            print(text, end="")

asyncio.run(stream_response())
```
## Captured Data
The integration automatically records:
| Field | Description |
|---|---|
| `gen_ai.system` | `anthropic` |
| `gen_ai.request.model` | Model ID (e.g., `claude-3-5-sonnet-20241022`) |
| `gen_ai.prompt` | System and user messages |
| `gen_ai.completion` | Response text |
| `gen_ai.usage.input_tokens` | Input token count |
| `gen_ai.usage.output_tokens` | Output token count |
| `duration` | Request latency |