# AutoGen

Auto-instrumentation for AutoGen conversational agents.

Risicare provides deep integration with AutoGen for conversational agent observability.
## Version Compatibility

autogen-agentchat >= 0.4.0 (v0.4) or pyautogen >= 0.2.0 (v0.2).

## Installation
```bash
pip install risicare[autogen]
# or
pip install risicare autogen-agentchat
```

## Basic Usage
### v0.2 (pyautogen)

```python
import risicare
from autogen import AssistantAgent, UserProxyAgent

risicare.init()

# Define agents as usual - they're automatically traced
assistant = AssistantAgent(
    name="assistant",
    llm_config={"model": "gpt-4o"}
)
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER"
)

# Start conversation - fully traced
user_proxy.initiate_chat(
    assistant,
    message="Write a Python function to calculate fibonacci"
)
```

### v0.4 (autogen-agentchat)
```python
import asyncio

import risicare
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

risicare.init()

async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    assistant = AssistantAgent(
        name="assistant",
        model_client=model_client,
    )
    # v0.4 uses an async, task-based API
    team = RoundRobinGroupChat([assistant], max_turns=1)
    result = await team.run(task="Write a Python function to calculate fibonacci")

asyncio.run(main())
```

## What's Captured
### Agent Details

| Field | Description |
|---|---|
| `agent.name` | Agent name |
| `agent.type` | AssistantAgent, UserProxyAgent, etc. |
| `agent.system_message` | System prompt |
| `agent.llm_config` | LLM configuration |
### Conversation Flow

| Field | Description |
|---|---|
| `message.sender` | Sending agent |
| `message.receiver` | Receiving agent |
| `message.content` | Message content |
| `message.role` | user / assistant / function |
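Taken together, the fields above might appear on a single message span roughly like this (a hypothetical sketch: the flat dotted-key layout is illustrative, not Risicare's actual export format):

```python
# Hypothetical sketch of the attributes on one message span, using
# the field names from the tables above. The dict-of-dotted-keys
# shape is illustrative only.
message_span_attributes = {
    "agent.name": "assistant",
    "agent.type": "AssistantAgent",
    "agent.system_message": "You are a helpful assistant.",
    "message.sender": "user_proxy",
    "message.receiver": "assistant",
    "message.content": "Write a Python function to calculate fibonacci",
    "message.role": "user",
}
```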
### Code Execution

**Limited Capture**: Code execution appears in the span timeline as part of the conversation flow, but code-specific attributes (language, code content, output, exit code) are not captured by auto-instrumentation.
## Group Chat

Multi-agent group chats are fully traced:
```python
from autogen import GroupChat, GroupChatManager

# user_proxy, coder, and reviewer are ConversableAgent instances
agents = [user_proxy, coder, reviewer]
groupchat = GroupChat(
    agents=agents,
    messages=[],
    max_round=10
)
manager = GroupChatManager(groupchat=groupchat)

# All agent interactions traced
user_proxy.initiate_chat(
    manager,
    message="Build a web scraper"
)
```

Risicare captures:
- Speaker selection logic
- Turn-taking sequence
- Agent responses
- Termination conditions
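To make the turn-taking data concrete, here is a sketch of reconstructing the speaking order from captured message events (the event list and the reconstruction itself are illustrative; the field names follow the Conversation Flow table):

```python
from collections import Counter

# Made-up message events, shaped like the Conversation Flow fields;
# real events come from the trace, not a hand-written list.
events = [
    {"message.sender": "user_proxy", "message.receiver": "coder"},
    {"message.sender": "coder", "message.receiver": "reviewer"},
    {"message.sender": "reviewer", "message.receiver": "user_proxy"},
    {"message.sender": "user_proxy", "message.receiver": "coder"},
]

# Turn-taking sequence: the order in which agents spoke.
speakers = [e["message.sender"] for e in events]

# Turns per agent, e.g. to spot an agent dominating the chat.
turn_counts = Counter(speakers)
```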
## Function Calling

Function/tool calls are traced.
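As an illustration, a traced call to a `get_weather` tool might surface attributes like these (a made-up sketch: the `tool.*` attribute names are hypothetical, not the instrumentation's documented schema):

```python
import json

def get_weather(location: str) -> str:
    return f"Weather in {location}: Sunny"

# Simulate the information a tool-call span could record:
# the tool name, its serialized arguments, and the result.
arguments = {"location": "Paris"}
tool_span = {
    "tool.name": "get_weather",
    "tool.arguments": json.dumps(arguments),
    "tool.result": get_weather(**arguments),
}
```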
v0.2 Syntax
tools parameter on AssistantAgent.def get_weather(location: str) -> str:
return f"Weather in {location}: Sunny"
assistant = AssistantAgent(
name="assistant",
llm_config={
"model": "gpt-4o",
"functions": [{
"name": "get_weather",
"parameters": {...}
}]
}
)
assistant.register_function(
function_map={"get_weather": get_weather}
)Code Execution
Code execution appears in the span timeline when agents run code:
```python
from autogen.coding import LocalCommandLineCodeExecutor

executor = LocalCommandLineCodeExecutor(work_dir="coding")
user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={"executor": executor}
)

# Code execution is visible in the span timeline,
# but code-specific attributes (content, language,
# output, errors) are not captured.
```

## Nested Chats
Nested conversations maintain trace hierarchy:
```python
# specialist is another ConversableAgent defined elsewhere
assistant.register_nested_chats(
    [{"recipient": specialist, "message": "Help with this"}],
    trigger=lambda sender: sender is user_proxy  # trigger receives the sending agent
)

# Nested chats traced as child spans
```

## Human-in-the-Loop
Human input points are captured:
```python
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS"  # Or "TERMINATE"
)

# Human input events traced with:
# - Input requested
# - Wait duration
# - Human response
```

## Termination
Conversation termination is tracked:
```python
assistant = AssistantAgent(
    name="assistant",
    is_termination_msg=lambda x: "TERMINATE" in x.get("content", "")
)

# Termination traced with:
# - Termination condition
# - Final message
# - Total turns
```

## Provider Spans
AutoGen instrumentation creates agent/framework-level spans. Underlying LLM calls (e.g., OpenAI, Anthropic) are traced separately by provider instrumentation, giving you both framework-level and LLM-level visibility.
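For example, a single assistant turn could yield a two-level hierarchy like the one sketched below (all span names are made up for illustration):

```python
# Illustrative trace tree: a framework-level AutoGen span wrapping
# a provider-level LLM span. Span names here are invented, not
# Risicare's actual naming scheme.
trace = {
    "name": "autogen.conversation",
    "children": [
        {
            "name": "autogen.agent.assistant",  # framework-level span
            "children": [
                {"name": "openai.chat.completion", "children": []},  # LLM-level span
            ],
        }
    ],
}

def depth(span: dict) -> int:
    """Depth of a span tree, counting the root as one level."""
    if not span["children"]:
        return 1
    return 1 + max(depth(child) for child in span["children"])
```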
## Visualization
View AutoGen execution in the dashboard:
- Conversation View: Message timeline between agents
- Agent Cards: Individual agent statistics
- Code Blocks: Code execution events in the timeline (code content and outputs are not captured by auto-instrumentation)
- Turn Analysis: Speaking patterns and flow