
Spans

Individual operations within a trace.

Spans represent individual operations within a trace, forming a tree structure.

Span Kinds

Every span has a `kind` field indicating its role. The 17 `SpanKind` values fall into three groups:

Standard Kinds

| Kind | Description | Auto-Created |
|------|-------------|--------------|
| `internal` | Default, internal operation | - |
| `server` | Server-side of RPC | - |
| `client` | Client-side of RPC | - |
| `producer` | Message producer | - |
| `consumer` | Message consumer | - |

Agent-Specific Kinds

| Kind | Description | Auto-Created |
|------|-------------|--------------|
| `agent` | Agent lifecycle span | `@agent` decorator |
| `llm_call` | LLM API call | Yes (auto-instrumented) |
| `tool_call` | Tool/function execution | Yes (auto-instrumented) |
| `retrieval` | RAG retrieval operation | Yes (framework integration) |
| `decision` | Agent decision point (generic) | Manual |
| `delegation` | Delegation to another agent | `@trace_delegate` decorator |
| `coordination` | Multi-agent coordination | `@trace_coordinate` decorator |
| `message` | Inter-agent message | `@trace_message` decorator |

Phase-Specific Kinds

| Kind | Description | Auto-Created |
|------|-------------|--------------|
| `think` | Reasoning/planning phase | `@trace_think` decorator |
| `decide` | Decision-making phase | `@trace_decide` decorator |
| `observe` | Environment observation phase | `@trace_observe` decorator |
| `reflect` | Self-evaluation phase | Manual |

Span Attributes

Common Attributes

| Attribute | Type | Description |
|-----------|------|-------------|
| `span_id` | string | Unique 16-char hex ID |
| `trace_id` | string | Parent trace ID (32-char hex) |
| `parent_span_id` | string | Parent span ID |
| `name` | string | Span name |
| `kind` | string | Kind from the tables above |
| `start_time` | timestamp | Start time |
| `end_time` | timestamp | End time |
| `duration_ms` | number | Duration in milliseconds |
| `status` | string | `unset`, `ok`, or `error` |
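The shape of a span record can be illustrated with a plain-Python sketch. The field names come from the table above; the `make_span` helper and the placeholder ID values are hypothetical, not part of the risicare API:

```python
from datetime import datetime, timezone

def make_span(name: str, kind: str, start: datetime, end: datetime) -> dict:
    # Builds a record shaped like the common-attributes table.
    # IDs here are placeholders; a real tracer generates random hex IDs.
    return {
        "span_id": "abc123def4567890",  # 16-char hex
        "trace_id": "0" * 32,           # 32-char hex
        "parent_span_id": None,         # a root span has no parent
        "name": name,
        "kind": kind,
        "start_time": start.isoformat(),
        "end_time": end.isoformat(),
        "duration_ms": (end - start).total_seconds() * 1000,
        "status": "ok",
    }

start = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
end = datetime(2024, 1, 1, 12, 0, 1, 500000, tzinfo=timezone.utc)
span = make_span("data_processing", "internal", start, end)
print(span["duration_ms"])  # 1500.0
```

Note that `duration_ms` is derived from `start_time` and `end_time`, so the three fields are always mutually consistent.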

LLM Span Attributes

| Attribute | Description |
|-----------|-------------|
| `gen_ai.system` | Provider (openai, anthropic, google, etc.) |
| `gen_ai.request.model` | Requested model name |
| `gen_ai.response.model` | Model name returned by API |
| `gen_ai.response.id` | Response ID |
| `gen_ai.usage.prompt_tokens` | Input tokens |
| `gen_ai.usage.completion_tokens` | Output tokens |
| `gen_ai.usage.total_tokens` | Total tokens |
| `gen_ai.request.temperature` | Sampling temperature |
| `gen_ai.request.max_tokens` | Max output tokens |
| `gen_ai.request.stream` | Whether streaming was used |
| `gen_ai.request.has_tools` | Whether tools were provided |
| `gen_ai.completion.tool_calls` | Number of tool calls |
| `gen_ai.completion.finish_reason` | Stop reason |
| `gen_ai.latency_ms` | Request latency in milliseconds |

Tool Span Attributes

| Attribute | Description |
|-----------|-------------|
| `tool.name` | Tool name |
| `tool.input` | Tool input (JSON) |
| `tool.output` | Tool output (JSON) |
| `tool.error` | Error message if failed |

Agent Span Attributes

| Attribute | Description |
|-----------|-------------|
| `agent.name` | Agent name |
| `agent.role` | Agent role |
| `agent.id` | Unique agent instance ID |
| `agent.parent_id` | Parent agent ID |
| `agent.iteration` | Loop iteration number |

Span Hierarchy

Spans form a tree rooted at the trace:

```python
from risicare import agent, trace_act, trace_decide, trace_think  # assumed import path

# This code creates:
# Trace
# └── Agent Span (orchestrator) [kind: agent]
#     ├── Think Span [kind: think]
#     │   └── LLM Span [kind: llm_call]
#     ├── Decide Span [kind: decide]
#     │   └── LLM Span [kind: llm_call]
#     └── Act Span [kind: tool_call]
#         └── Tool Span [kind: tool_call]

@agent(name="orchestrator")
def run():
    @trace_think
    def think():
        return llm.generate("Analyze...")

    @trace_decide
    def decide():
        return llm.generate("Plan...")

    @trace_act
    def act():
        return tool.execute()

    think()
    decide()
    act()
```

Error Spans

When an operation fails:

```json
{
    "span_id": "abc123def4567890",
    "kind": "llm_call",
    "status": "error",
    "error": {
        "type": "ToolExecutionError",
        "message": "API timeout after 30s",
        "code": "TOOL.EXECUTION.TIMEOUT",
        "stack_trace": "..."
    }
}
```
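To see how such an error object could be assembled from a caught exception, here is a plain-Python sketch. The `error_payload` helper is illustrative, not part of the risicare API; in practice the tracer populates these fields automatically when an exception escapes a span:

```python
import traceback

def error_payload(exc: BaseException, code: str) -> dict:
    # Mirrors the error structure shown above (assumed schema).
    return {
        "type": type(exc).__name__,
        "message": str(exc),
        "code": code,
        "stack_trace": "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)
        ),
    }

try:
    raise TimeoutError("API timeout after 30s")
except TimeoutError as exc:
    payload = error_payload(exc, "TOOL.EXECUTION.TIMEOUT")

print(payload["type"])     # TimeoutError
print(payload["message"])  # API timeout after 30s
```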

Custom Spans

Create custom spans for any operation:

```python
from risicare import get_tracer

def process_data():
    tracer = get_tracer()
    with tracer.start_span(
        "data_processing",
        attributes={"batch_size": 100},
    ) as span:
        result = do_processing()
        span.set_attribute("records", len(result))
        return result
```

Span Events

Add events within a span:

```python
from risicare import get_tracer

tracer = get_tracer()
with tracer.start_span("pipeline") as span:
    span.add_event("stage_1_complete", {"records": 100})
    process_stage_1()

    span.add_event("stage_2_complete", {"records": 95})
    process_stage_2()
```

Performance Analysis

Critical Path

The dashboard highlights the critical path: the longest sequential chain of spans, which determines total latency.
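A rough sketch of how such a path could be derived from span records using only the common attributes. This is one simple approximation (descend into the longest-running child at each step); the dashboard's actual algorithm may differ:

```python
def critical_path(spans: list) -> list:
    # Index children by parent_span_id and find the root span.
    children = {}
    root = None
    for s in spans:
        if s["parent_span_id"] is None:
            root = s
        else:
            children.setdefault(s["parent_span_id"], []).append(s)

    # Walk from the root, always taking the longest child.
    path = []
    node = root
    while node is not None:
        path.append(node["name"])
        kids = children.get(node["span_id"], [])
        node = max(kids, key=lambda s: s["duration_ms"]) if kids else None
    return path

spans = [
    {"span_id": "a", "parent_span_id": None, "name": "agent", "duration_ms": 900},
    {"span_id": "b", "parent_span_id": "a", "name": "think", "duration_ms": 700},
    {"span_id": "c", "parent_span_id": "a", "name": "act", "duration_ms": 150},
    {"span_id": "d", "parent_span_id": "b", "name": "llm_call", "duration_ms": 650},
]
print(critical_path(spans))  # ['agent', 'think', 'llm_call']
```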

Parallel Execution

Spans at the same level with overlapping times indicate parallel execution.
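The overlap test itself is a two-interval comparison. A minimal sketch, assuming `start_time` and `end_time` are comparable numbers (real timestamps work the same way):

```python
def overlaps(a: dict, b: dict) -> bool:
    # Two sibling spans ran in parallel if their time windows intersect.
    return a["start_time"] < b["end_time"] and b["start_time"] < a["end_time"]

s1 = {"start_time": 0.0, "end_time": 2.0}
s2 = {"start_time": 1.5, "end_time": 3.0}
s3 = {"start_time": 2.5, "end_time": 4.0}
print(overlaps(s1, s2))  # True  -> ran in parallel
print(overlaps(s1, s3))  # False -> ran sequentially
```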

Bottlenecks

Identify slow spans:

  • Sort by duration
  • Filter by kind
  • Compare across traces
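The first two steps (sort by duration, filter by kind) are straightforward to apply to exported span records. The `slowest` helper below is illustrative, not part of the risicare API:

```python
def slowest(spans: list, kind: str = None, top: int = 3) -> list:
    # Optionally filter by span kind, then sort descending by duration.
    if kind is not None:
        spans = [s for s in spans if s["kind"] == kind]
    return sorted(spans, key=lambda s: s["duration_ms"], reverse=True)[:top]

spans = [
    {"name": "think", "kind": "llm_call", "duration_ms": 820},
    {"name": "lookup", "kind": "tool_call", "duration_ms": 45},
    {"name": "decide", "kind": "llm_call", "duration_ms": 610},
]
print([s["name"] for s in slowest(spans, kind="llm_call")])  # ['think', 'decide']
```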

Next Steps