
Providers

Auto-instrumentation for LLM providers.

Risicare automatically instruments popular LLM provider libraries: after a single call to risicare.init(), requests are traced with no further code changes.

Supported Providers

Python SDK

JavaScript SDK

How It Works

When you call risicare.init(), the SDK automatically patches supported provider libraries:

import risicare
from openai import OpenAI
 
risicare.init()
 
client = OpenAI()
 
# This call is automatically traced
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
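Under the hood this is classic monkey-patching: init() replaces methods such as chat.completions.create with a timing wrapper. Below is a simplified, self-contained sketch of the idea, not Risicare's actual implementation; wrap_with_tracing, record_span, and fake_create are illustrative stand-ins.

```python
import functools
import time

def wrap_with_tracing(fn, record_span):
    """Wrap a provider method so each call is timed and recorded as a span."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        record_span({
            "name": fn.__name__,
            "latency_ms": (time.perf_counter() - start) * 1000,
        })
        return result
    return wrapper

spans = []
# Stand-in for a provider method; a real SDK would patch it in place.
fake_create = wrap_with_tracing(lambda **kw: {"model": kw["model"]}, spans.append)
response = fake_create(model="gpt-4o")
```

The caller's code is unchanged; the wrapper is transparent because functools.wraps preserves the original function's metadata.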

Captured Data

For each LLM call, Risicare captures:

| Field | Description |
| --- | --- |
| model | Model name (gpt-4o, claude-3-sonnet, etc.) |
| provider | Provider name (openai, anthropic, etc.) |
| prompt_tokens | Input token count |
| completion_tokens | Output token count |
| total_tokens | Total token count |
| cost_usd | Estimated cost in USD |
| latency_ms | Request duration in milliseconds |
| temperature | Sampling temperature |
| max_tokens | Maximum output tokens |
| prompt | Input prompt (if trace_content=True) |
| completion | Output completion (if trace_content=True) |
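cost_usd is an estimate: it has to be derived from the token counts and a per-model price table, roughly as in the sketch below. The rates and the estimate_cost_usd helper are hypothetical, not part of the SDK; real prices vary by provider, model, and over time.

```python
# Hypothetical per-million-token rates; treat these numbers as placeholders.
RATES_PER_1M_TOKENS = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def estimate_cost_usd(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of a call from its token counts."""
    rates = RATES_PER_1M_TOKENS[model]
    return (
        prompt_tokens * rates["input"] + completion_tokens * rates["output"]
    ) / 1_000_000

total = estimate_cost_usd("gpt-4o", prompt_tokens=1_000, completion_tokens=500)
```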

Manual Instrumentation

If auto-instrumentation doesn't meet your needs, you can trace functions yourself with the @trace decorator:

from risicare import trace
 
@trace(name="custom-llm-call", kind="llm_call")
def call_custom_llm(prompt: str) -> str:
    response = ...  # your custom LLM call goes here
    return response

You can also set provider and model attributes manually:

from risicare import get_tracer
 
tracer = get_tracer()
 
with tracer.start_span("custom-llm", kind="llm_call") as span:
    span.set_attribute("llm.provider", "custom")
    span.set_attribute("llm.model", "my-model")
    response = call_llm(prompt)

Next Steps

Select a provider to see detailed integration guides: