AgentBasis integrates with LangChain via callbacks, providing full visibility into your chains’ execution steps.
## Setup

```python
import agentbasis
from agentbasis.frameworks.langchain import get_callback_handler

# Initialize AgentBasis first
agentbasis.init(api_key="your-api-key", agent_id="your-agent-id")

# Get a callback handler
handler = get_callback_handler()
```
Unlike the OpenAI and Anthropic integrations, which patch the client libraries globally, the LangChain integration requires you to pass the callback handler explicitly to your components.
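The distinction can be illustrated with a minimal, self-contained sketch (hypothetical names, not the AgentBasis implementation): global patching traces every call automatically, while a callback handler only traces the calls it is explicitly passed to.

```python
class Client:
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

# Global patching (OpenAI/Anthropic style): wrap the method once, trace always.
patched_events = []
_original = Client.complete

def _traced(self, prompt):
    patched_events.append(("start", prompt))
    result = _original(self, prompt)
    patched_events.append(("end", result))
    return result

Client.complete = _traced
Client().complete("hi")  # traced with no per-call changes

# Explicit callbacks (LangChain style): only calls given the handler are traced.
class Handler:
    def __init__(self):
        self.events = []

def invoke(prompt, callbacks=()):
    for cb in callbacks:
        cb.events.append(("start", prompt))
    return f"echo: {prompt}"

handler = Handler()
invoke("hi", callbacks=[handler])  # traced
invoke("hi")                       # not traced
```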
## Basic Usage

Pass the handler to your LangChain calls via the `config` parameter:

```python
from langchain_openai import ChatOpenAI

handler = get_callback_handler()
llm = ChatOpenAI(model="gpt-4")

response = llm.invoke(
    "Hello world",
    config={"callbacks": [handler]},
)
```
## Using get_callback_config

For convenience, use `get_callback_config()` to get a pre-configured dict:

```python
from agentbasis.frameworks.langchain import get_callback_config

config = get_callback_config()
response = llm.invoke("Hello world", config=config)
```
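A plausible sketch of what such a helper wraps (assumed, not taken from the AgentBasis source): it packages a handler into LangChain's standard config shape, `{"callbacks": [...]}`.

```python
def make_callback_config(handler):
    """Build a LangChain-style config dict carrying one callback handler."""
    return {"callbacks": [handler]}

config = make_callback_config("handler-sentinel")
```

Because the result is a plain dict, it can be merged with other config keys, e.g. `{**config, "run_name": "my-run"}`.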
## Using instrument (Singleton)

Use `instrument()` to get a global singleton handler:

```python
from agentbasis.frameworks.langchain import instrument

# Returns the same handler instance every time
handler = instrument()

# Use throughout your application
llm.invoke("Hello", config={"callbacks": [handler]})
```
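The singleton behavior can be sketched in a few lines (illustrative names, not the AgentBasis source): the handler is created lazily on first use and every later call returns the same instance.

```python
class CallbackHandler:
    """Stand-in for the real handler class."""
    pass

_handler = None

def instrument():
    """Return a process-wide handler, creating it on the first call."""
    global _handler
    if _handler is None:
        _handler = CallbackHandler()
    return _handler
```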
## Chains

Trace entire chain executions with parent-child relationships:

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from agentbasis.frameworks.langchain import get_callback_config

chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Answer this: {query}"),
)

result = chain.invoke(
    {"query": "What is the capital of France?"},
    config=get_callback_config(),
)
```
## Tools

Tool invocations are automatically traced:

```python
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain.tools import tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from agentbasis.frameworks.langchain import get_callback_handler

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

# Build the agent; the prompt must include an agent_scratchpad placeholder
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])
agent = create_openai_functions_agent(llm, [search], prompt)

handler = get_callback_handler()
agent_executor = AgentExecutor(
    agent=agent,
    tools=[search],
    callbacks=[handler],  # can also be passed to the constructor
)

result = agent_executor.invoke(
    {"input": "Search for Python tutorials"}
)
```
## Retrievers

RAG retriever operations are traced:

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from agentbasis.frameworks.langchain import get_callback_config

vectorstore = FAISS.from_texts(["Document 1", "Document 2"], OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

docs = retriever.invoke(
    "search query",
    config=get_callback_config(),
)
```
## Trace Structure

The callback handler creates nested spans showing the full execution tree:

```
langchain.chain.RetrievalQA
├── langchain.retriever.VectorStoreRetriever
│   ├── query: "What is machine learning?"
│   └── documents: [...]
└── langchain.llm.ChatOpenAI
    ├── model: "gpt-4"
    ├── prompt: [...]
    └── completion: "..."
```
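How paired start/end callbacks can produce such a nested tree is worth spelling out: each start event opens a child span under the current one, and each end event pops back to the parent. The sketch below is illustrative only, not the AgentBasis implementation.

```python
class Span:
    def __init__(self, name):
        self.name = name
        self.children = []

class TreeBuilder:
    def __init__(self):
        self.root = Span("root")
        self.stack = [self.root]

    def on_start(self, name):  # e.g. on_chain_start / on_llm_start
        span = Span(name)
        self.stack[-1].children.append(span)
        self.stack.append(span)

    def on_end(self):  # e.g. on_chain_end / on_llm_end
        self.stack.pop()

b = TreeBuilder()
b.on_start("langchain.chain.RetrievalQA")
b.on_start("langchain.retriever.VectorStoreRetriever")
b.on_end()
b.on_start("langchain.llm.ChatOpenAI")
b.on_end()
b.on_end()
```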
## Captured Data

The integration traces:

| Component | Captured Data |
|---|---|
| LLM | Model, prompts, completions, tokens, latency |
| Chain | Chain type, inputs, outputs, duration |
| Tool | Tool name, input, output, duration |
| Retriever | Query, retrieved documents, duration |
## With User Context

Combine with AgentBasis context for per-user tracing:

```python
import agentbasis
from agentbasis.frameworks.langchain import get_callback_config

# Set user context
agentbasis.set_user("user-123")
agentbasis.set_session("session-456")

# All traces will include user context
result = chain.invoke(
    {"query": "Hello"},
    config=get_callback_config(),
)
```
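One plausible mechanism for attaching per-user context to every span is Python's `contextvars` module; the sketch below is an assumption for illustration, not the actual AgentBasis internals.

```python
import contextvars

_user = contextvars.ContextVar("user", default=None)
_session = contextvars.ContextVar("session", default=None)

def set_user(user_id):
    _user.set(user_id)

def set_session(session_id):
    _session.set(session_id)

def current_trace_attributes():
    """Attributes a handler could stamp onto every span it emits."""
    return {"user_id": _user.get(), "session_id": _session.get()}

set_user("user-123")
set_session("session-456")
```

Because `ContextVar` values are isolated per async task, this style of context also stays correct when multiple users are served concurrently.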