# LangChain

Auto-instrument LangChain chains and agents.

Risicare automatically instruments LangChain for comprehensive chain and agent observability.
## Installation

```shell
pip install risicare[langchain]
# or
pip install risicare langchain langchain-openai
```

### Version Compatibility

Requires langchain-core >= 0.2.0.
## Auto-Instrumentation

```python
import risicare
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

risicare.init()

llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
])
chain = prompt | llm

# Automatically traced
response = chain.invoke({"input": "Hello!"})
```

## What's Captured
| Feature | Description |
|---|---|
| Chain Execution | Full chain invoke/ainvoke calls |
| LLM Calls | All LLM provider calls (deduplicated) |
| Prompt Templates | Template formatting |
| Output Parsers | Parsing operations |
| Tool Calls | Tool/function executions |
| Retriever Calls | RAG retrieval operations |
| Memory Operations | Conversation memory access |
## Span Hierarchy

```text
langchain.chain/{class}
├── langchain.chat/{model}
├── langchain.tool/{name}
├── langchain.retriever/{name}
└── langchain.agent_action/{tool}
```
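Following the naming scheme above, span names combine a span kind with the component's identifier (class name, model name, or tool name). The helper below is purely illustrative of that pattern; `span_name` and the kind strings are hypothetical, not part of Risicare's API:

```python
# Hypothetical helper sketching the span-naming pattern shown above.
# Not part of Risicare's public API.
def span_name(kind: str, identifier: str) -> str:
    """Build a span name like 'langchain.chain/RunnableSequence'."""
    return f"langchain.{kind}/{identifier}"

print(span_name("chain", "RunnableSequence"))  # langchain.chain/RunnableSequence
print(span_name("chat", "gpt-4o"))             # langchain.chat/gpt-4o
print(span_name("tool", "search"))             # langchain.tool/search
```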
## Provider Deduplication
When using LangChain, underlying LLM provider spans are automatically suppressed to avoid duplicate traces. You don't need to disable provider instrumentation manually.
## Agents

LangChain agents are fully traced:

```python
from langchain.agents import create_react_agent, AgentExecutor
from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

agent = create_react_agent(llm, [search], prompt)
executor = AgentExecutor(agent=agent, tools=[search])

# Full agent execution is traced
result = executor.invoke({"input": "Search for AI news"})
```

Agent spans include:
- Agent iterations
- Tool selections
- Tool executions
- Final answer generation
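The stages above follow the shape of a ReAct-style agent loop: iterate, pick a tool, run it, observe, then produce a final answer. This toy, model-free loop makes the stages concrete; it is purely illustrative and not LangChain's or Risicare's implementation:

```python
def toy_agent_loop(question: str, tools: dict, max_iterations: int = 3):
    """Toy ReAct-style loop; each trace entry mirrors a traced stage."""
    trace = []
    observation = None
    for step in range(max_iterations):
        trace.append(f"iteration {step}")            # agent iteration
        if observation is None:
            tool_name = "search"                      # tool selection (hard-coded here)
            trace.append(f"select {tool_name}")
            observation = tools[tool_name](question)  # tool execution
            trace.append(f"execute {tool_name}")
        else:
            trace.append("final answer")              # final answer generation
            return observation, trace
    return observation, trace

answer, trace = toy_agent_loop(
    "Search for AI news",
    {"search": lambda q: f"Results for: {q}"},
)
print(answer)  # Results for: Search for AI news
```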
## LCEL Chains

LangChain Expression Language (LCEL) chains are automatically traced:

```python
from langchain_core.output_parsers import StrOutputParser

chain = prompt | llm | StrOutputParser()

# Each component is a span
result = chain.invoke({"input": "Hello"})
```

## Streaming
```python
# astream must be consumed inside an async function
async for chunk in chain.astream({"input": "Write a story"}):
    print(chunk, end="")
```

## RAG Chains
Retrieval chains capture document retrieval:
```python
from langchain_core.runnables import RunnablePassthrough

# Assumes `retriever` is a LangChain retriever (e.g. from a vector store)
# and `prompt` is a template with `context` and `question` variables
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

# Retriever calls are captured with documents
result = rag_chain.invoke("What is Risicare?")
```
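The dict step in the chain above fans the single input out to each value in parallel (LangChain coerces a dict in an LCEL pipeline into a `RunnableParallel`). In plain Python, the mapping semantics look roughly like this sketch, with stand-ins for the real retriever and `RunnablePassthrough`:

```python
def run_parallel(steps: dict, value):
    """Apply each runnable-like callable to the same input, keyed by name."""
    return {key: fn(value) for key, fn in steps.items()}

# Stand-ins for a real retriever and RunnablePassthrough
fake_retriever = lambda q: [f"doc about {q}"]
passthrough = lambda q: q

out = run_parallel(
    {"context": fake_retriever, "question": passthrough},
    "What is Risicare?",
)
print(out)
# {'context': ['doc about What is Risicare?'], 'question': 'What is Risicare?'}
```

The resulting dict is what flows into the prompt template, which is why the template needs `context` and `question` variables.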