Traces

End-to-end visibility into agent execution.

A trace represents a complete execution from request to response.

Trace Structure

Trace (trace_id: abc123)
├── Root Span (agent: orchestrator)
│   ├── LLM Span (model: gpt-4o, 234ms)
│   ├── Tool Span (tool: search, 1200ms)
│   └── Agent Span (agent: researcher)
│       ├── LLM Span (model: gpt-4o-mini, 156ms)
│       └── Tool Span (tool: fetch, 89ms)
└── Metadata
    ├── session_id
    ├── user_id
    ├── environment
    └── tags
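The hierarchy above can be sketched as nested span records. This is an illustrative model only — the `Span` and `Trace` class names and their fields are assumptions for the sketch, not the SDK's actual types.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of the trace hierarchy above; class and field names
# are assumptions, not the SDK's actual types.
@dataclass
class Span:
    name: str
    kind: str                        # "agent", "llm", or "tool"
    duration_ms: float
    children: List["Span"] = field(default_factory=list)

@dataclass
class Trace:
    trace_id: str
    root: Span
    metadata: dict = field(default_factory=dict)

root = Span("orchestrator", "agent", 1800, children=[
    Span("gpt-4o", "llm", 234),
    Span("search", "tool", 1200),
    Span("researcher", "agent", 300, children=[
        Span("gpt-4o-mini", "llm", 156),
        Span("fetch", "tool", 89),
    ]),
])
trace = Trace("abc123", root, metadata={"session_id": "s1", "environment": "prod"})

def span_count(span: Span) -> int:
    """Count a span and all of its descendants."""
    return 1 + sum(span_count(c) for c in span.children)
```

Counting the example tree gives six spans, matching the diagram: the orchestrator, its two direct children, the researcher agent, and the researcher's two children.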

Trace Attributes

Attribute        Type       Description
---------        ----       -----------
trace_id         string     Unique 32-char hex identifier
start_time       timestamp  Trace start time
end_time         timestamp  Trace end time
duration_ms      number     Total duration in milliseconds
status           string     ok or error
span_count       number     Total spans in trace
total_tokens     number     Sum of all LLM tokens
total_cost_usd   number     Sum of all LLM costs
root_agent       string     Top-level agent name
error_code       string     Error taxonomy code (if error)
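Several of these attributes are rollups over the trace's spans (`span_count`, `total_tokens`, `total_cost_usd`). A minimal sketch of that aggregation, assuming per-span records with `tokens` and `cost_usd` fields (those field names are an assumption):

```python
# Illustrative rollup of trace-level attributes from per-span records;
# the span fields "tokens" and "cost_usd" are assumed names.
spans = [
    {"kind": "llm",  "duration_ms": 234,  "tokens": 512, "cost_usd": 0.004},
    {"kind": "tool", "duration_ms": 1200, "tokens": 0,   "cost_usd": 0.0},
    {"kind": "llm",  "duration_ms": 156,  "tokens": 301, "cost_usd": 0.001},
]

trace_attrs = {
    "span_count": len(spans),
    "total_tokens": sum(s["tokens"] for s in spans if s["kind"] == "llm"),
    "total_cost_usd": sum(s["cost_usd"] for s in spans if s["kind"] == "llm"),
}
```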

Viewing Traces

Trace List

The trace list shows all traces with key metrics:

Screenshot: trace list view with agent names, durations, tokens, costs, and status indicators.

The KPI strip at the top shows: Total Traces, Error Rate, P50 Latency (with P90), P95 Latency (with P99), Avg Duration, and LLM Calls (with token count).

Column    Description
------    -----------
Time      When the trace occurred
Trace ID  Click to view details
Agent     Root agent name
Duration  Total latency
Tokens    Total token usage
Cost      Total USD cost
Status    Success or error badge

Filtering

Filter traces by:

# Time range
last 1 hour
last 24 hours
custom range

# Status
status:ok
status:error

# Agent
agent:planner
agent:researcher

# Performance
latency:>5000
cost:>0.10

# Content
contains:"specific text"
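The filter expressions above share a `field:value` shape, with an optional comparison operator for numeric fields and quotes for text search. A hypothetical client-side sketch of that grammar (the dashboard's actual parser is not documented here, so the default `=` operator and error behavior are assumptions):

```python
import re

# Hypothetical parser for the filter syntax shown above.
# The "=" default operator is an assumption for this sketch.
FILTER_RE = re.compile(r'^(\w+):(>|<)?"?([^"]+)"?$')

def parse_filter(expr: str):
    """Split a filter like 'latency:>5000' into (field, op, value)."""
    m = FILTER_RE.match(expr)
    if not m:
        raise ValueError(f"unrecognized filter: {expr}")
    field, op, value = m.groups()
    return field, op or "=", value
```

For example, `parse_filter("latency:>5000")` yields `("latency", ">", "5000")`, and the quoted form `contains:"specific text"` parses with the quotes stripped.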

Trace Detail View

Waterfall

Click any trace to see the full execution waterfall with nested spans, timing, and LLM details.

The waterfall shows:

  • Span hierarchy with parent-child nesting
  • Parallel execution paths
  • Timing relationships and critical path
  • Per-span details (model, tokens, cost) in the detail panel

Span List

Table view with:

  • Span name and type
  • Duration
  • Tokens (for LLM spans)
  • Status
  • Expandable details

Content View

For each LLM span:

  • System prompt
  • User messages
  • Assistant response
  • Tool calls and results

Content Privacy

Content capture can be disabled with trace_content=False for sensitive applications.
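Following the `risicare.init` pattern shown elsewhere in these docs, disabling content capture would look like this (timing and token metrics are presumably still recorded; only message content is suppressed):

```python
risicare.init(
    trace_content=False  # Do not store prompts, responses, or tool payloads
)
```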

Trace Comparison

Compare two traces side-by-side:

  1. Select traces from the list
  2. Click "Compare"
  3. View diff of:
    • Span structure
    • Timing differences
    • Content changes
    • Cost comparison

Exporting Traces

JSON Export

# Via API
curl -X GET "https://app.risicare.ai/api/v1/traces/{trace_id}" \
  -H "Authorization: Bearer rsk-..."
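The same request in Python, using only the standard library. The endpoint URL and bearer-token scheme come from the curl example above; the `trace_id` and key values are placeholders:

```python
import urllib.request

def build_trace_request(trace_id: str, api_key: str) -> urllib.request.Request:
    """Build the GET request for a single trace, mirroring the curl example."""
    return urllib.request.Request(
        f"https://app.risicare.ai/api/v1/traces/{trace_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# req = build_trace_request("abc123", "rsk-...")
# body = urllib.request.urlopen(req).read()  # JSON-encoded trace
```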

Dashboard Export

  1. Select traces
  2. Click "Export"
  3. Choose format (JSON, CSV)

Trace Sampling

For high-volume applications:

risicare.init(
    sample_rate=0.1  # Capture 10% of traces
)

Sampling is deterministic by trace_id for consistency.
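Deterministic sampling typically hashes the trace ID into a bucket and compares it to the sample rate. A minimal sketch of the idea — the SDK's actual hash function is not documented, so SHA-256 here is an assumption:

```python
import hashlib

def sampled(trace_id: str, sample_rate: float) -> bool:
    """Deterministic sampling decision keyed on trace_id (illustrative only)."""
    digest = hashlib.sha256(trace_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    return bucket < sample_rate
```

Because the decision depends only on `trace_id`, a given trace is either always captured or always dropped, so retries and replays are sampled consistently.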

Next Steps