
OpenTelemetry

Bridge Risicare with OpenTelemetry for unified observability.

Risicare interoperates with OpenTelemetry in both directions: it can export spans to any OTel collector and ingest spans from existing OTel instrumentation.

OTel Bridge

Export Risicare spans to OpenTelemetry:

import risicare
 
risicare.init(
    api_key="rsk-...",
    otel_bridge=True,  # Enable OTel bridge
)

With the bridge enabled, all Risicare spans are also exported to the configured OTel collector.

OTLP Exporter

Export directly to OTLP-compatible backends:

risicare.init(
    api_key="rsk-...",
    otel_bridge=True,
    otlp_endpoint="http://localhost:4317",
    otlp_headers={"Authorization": "Bearer token"},
)

OTLP Ingestion

Risicare can receive spans from existing OTel instrumentation:

POST /v1/otlp/v1/traces
Content-Type: application/x-protobuf
Authorization: Bearer rsk-...

<OTLP trace data>

This allows migrating existing OTel instrumentation to Risicare without code changes.
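One low-friction migration path is to leave your existing OTel SDK setup untouched and point its OTLP exporter at the ingestion endpoint via the standard OTLP environment variables. A minimal sketch — the `api.risicare.example` host is a placeholder assumption; substitute your actual Risicare ingestion URL:

```python
import os

# Redirect an existing OTel SDK deployment to Risicare's OTLP ingestion
# endpoint using the standard OTLP exporter environment variables.
# NOTE: the host below is a placeholder, not a real Risicare URL.
os.environ["OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"] = (
    "https://api.risicare.example/v1/otlp/v1/traces"
)
os.environ["OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"] = "http/protobuf"
# Header values follow W3C Baggage encoding, so the space in
# "Bearer rsk-..." must be percent-encoded.
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "Authorization=Bearer%20rsk-..."
```

Because the OTel SDK reads these variables at startup, no application code changes are required.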

Programmatic Configuration

from risicare import OTLPExporter
 
# Create a custom exporter
exporter = OTLPExporter(
    endpoint="http://localhost:4317",
    headers={"Authorization": "Bearer token"},
    compression="gzip",
)
 
risicare.init(
    api_key="rsk-...",
    project_id="proj-...",
    exporters=[exporter],
)

Multiple Exporters

Export to multiple backends:

from risicare import HttpExporter, OTLPExporter, ConsoleExporter
 
risicare.init(
    api_key="rsk-...",
    project_id="proj-...",
    exporters=[
        HttpExporter(),           # Risicare cloud
        OTLPExporter(             # Jaeger
            endpoint="http://jaeger:4317"
        ),
        ConsoleExporter(),        # Debug output
    ],
)

Semantic Conventions

Risicare follows OpenTelemetry semantic conventions for GenAI:

Attribute                         Description
gen_ai.system                     LLM provider (openai, anthropic, etc.)
gen_ai.request.model              Requested model name
gen_ai.response.model             Actual model used
gen_ai.usage.prompt_tokens        Input token count
gen_ai.usage.completion_tokens    Output token count
gen_ai.request.temperature        Temperature setting
gen_ai.request.max_tokens         Max tokens setting
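For instance, a completed chat-completion span might carry attributes like the following. The values are illustrative only, not output from a real call:

```python
# Illustrative GenAI span attributes following the conventions above.
# Model names and counts are example values.
span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.response.model": "gpt-4o-2024-08-06",
    "gen_ai.usage.prompt_tokens": 412,
    "gen_ai.usage.completion_tokens": 87,
    "gen_ai.request.temperature": 0.2,
    "gen_ai.request.max_tokens": 1024,
}

# Total token usage can be derived from the two usage attributes.
total_tokens = (
    span_attributes["gen_ai.usage.prompt_tokens"]
    + span_attributes["gen_ai.usage.completion_tokens"]
)
```

Because these are the standard OTel GenAI attribute names, any OTLP-compatible backend can aggregate them alongside spans from other instrumentation.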

Context Propagation

W3C Trace Context is fully supported:

from risicare import inject_trace_context, extract_trace_context
 
# Inject into outgoing request
headers = {}
inject_trace_context(headers)
# headers now contains 'traceparent' and 'tracestate'
 
# Extract from incoming request
context = extract_trace_context(request.headers)
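The traceparent value written by inject_trace_context follows the W3C Trace Context format: four dash-separated hex fields (version, trace-id, span-id, trace-flags). A minimal sketch of parsing one, independent of any SDK (parse_traceparent is an illustrative helper, not part of the Risicare API):

```python
def parse_traceparent(header: str) -> dict:
    """Split a W3C traceparent header into its four fields."""
    version, trace_id, span_id, flags = header.split("-")
    # trace-id is 16 bytes (32 hex chars); span-id is 8 bytes (16 hex chars).
    assert len(trace_id) == 32 and len(span_id) == 16
    return {
        "version": version,
        "trace_id": trace_id,
        "span_id": span_id,
        # Bit 0 of trace-flags is the "sampled" flag.
        "sampled": bool(int(flags, 16) & 0x01),
    }

# Example value taken from the W3C Trace Context specification.
parsed = parse_traceparent(
    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
)
```

Any service that propagates this header unmodified keeps Risicare spans and upstream/downstream OTel spans in the same distributed trace.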

Existing OTel SDK

If you already use the OpenTelemetry Python SDK:

import risicare
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from risicare.integrations.otel import RisicareSpanProcessor
 
# Initialize Risicare first (required for the processor to work)
risicare.init(
    api_key="rsk-...",
    project_id="proj-...",
)
 
# Add Risicare as a span processor (no constructor params needed)
provider = TracerProvider()
provider.add_span_processor(RisicareSpanProcessor())
trace.set_tracer_provider(provider)
 
# Existing OTel instrumentation now exports to Risicare

Next Steps