Providers
Auto-instrumentation for LLM providers.
Risicare automatically instruments popular LLM providers with zero code changes.
Supported Providers
Python SDK
OpenAI
GPT-4o, GPT-4, embeddings
Anthropic
Claude 3.5, Claude 3
Google
Gemini Pro, PaLM
Cohere
Command, embeddings
Mistral
Mistral Large, Medium
Groq
Ultra-fast inference
Together AI
Open-source models
Ollama
Local inference
Amazon Bedrock
AWS multi-model
Vertex AI
Google Cloud AI
Cerebras
Hardware-accelerated
HuggingFace
Inference API
OpenAI-Compatible
8+ providers via base_url
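OpenAI-compatible providers are reached by pointing the standard OpenAI client at the provider's endpoint via `base_url`. The helper and URLs below are an illustrative sketch, not an official Risicare list — check each provider's docs for its actual endpoint:

```python
# Illustrative mapping of a few OpenAI-compatible endpoints (examples only).
COMPATIBLE_BASE_URLS = {
    "groq": "https://api.groq.com/openai/v1",
    "together": "https://api.together.xyz/v1",
    "ollama": "http://localhost:11434/v1",
}

def openai_client_kwargs(provider: str, api_key: str) -> dict:
    """Build kwargs for OpenAI(**kwargs) targeting a compatible provider."""
    return {"base_url": COMPATIBLE_BASE_URLS[provider], "api_key": api_key}

# e.g. client = OpenAI(**openai_client_kwargs("groq", api_key))
print(openai_client_kwargs("ollama", "unused")["base_url"])
```

Because the request goes through the patched OpenAI client, such calls are traced the same way as native OpenAI calls.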
JavaScript SDK
The JS SDK supports the same 12 native providers via explicit patchX() calls, plus 8 host-detected providers via patchOpenAI():
OpenAI (JS)
Node.js OpenAI client
Anthropic (JS)
Node.js Anthropic client
Vercel AI (JS)
Vercel AI SDK
Additional JS providers (Google, Mistral, Groq, Cohere, Together, Ollama, HuggingFace, Cerebras, Bedrock) follow the same pattern — import from risicare/<provider> and call patchX(client). See the JS SDK reference for the full list.
How It Works
When you call risicare.init(), the SDK automatically patches supported provider libraries:
```python
import risicare
from openai import OpenAI

risicare.init()

client = OpenAI()

# This call is automatically traced
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Captured Data
For each LLM call, Risicare captures:
| Field | Description |
|---|---|
| model | Model name (gpt-4o, claude-3-sonnet, etc.) |
| provider | Provider name (openai, anthropic, etc.) |
| prompt_tokens | Input token count |
| completion_tokens | Output token count |
| total_tokens | Total token count |
| cost_usd | Estimated cost in USD |
| latency_ms | Request duration in milliseconds |
| temperature | Sampling temperature |
| max_tokens | Maximum output tokens |
| prompt | Input prompt (if trace_content=True) |
| completion | Output completion (if trace_content=True) |
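The patching idea behind this capture can be sketched as a thin wrapper around the provider's method that records the fields above. This is an illustrative mock, not Risicare's actual implementation; the `FakeCompletions` class and all values are made up:

```python
import time

class FakeCompletions:
    """Stand-in for a provider client's completions API (hypothetical)."""
    def create(self, model, messages):
        return {"model": model, "usage": {"prompt_tokens": 3, "completion_tokens": 5}}

captured = []  # recorded call metadata

def patch_create(completions):
    """Replace completions.create with a wrapper that records call metadata."""
    original = completions.create

    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        response = original(*args, **kwargs)
        usage = response.get("usage", {})
        prompt_toks = usage.get("prompt_tokens", 0)
        completion_toks = usage.get("completion_tokens", 0)
        captured.append({
            "model": kwargs.get("model"),
            "prompt_tokens": prompt_toks,
            "completion_tokens": completion_toks,
            "total_tokens": prompt_toks + completion_toks,
            "latency_ms": (time.perf_counter() - start) * 1000,
        })
        return response

    completions.create = wrapped

client = FakeCompletions()
patch_create(client)
client.create(model="gpt-4o", messages=[{"role": "user", "content": "Hello!"}])
print(captured[0]["model"], captured[0]["total_tokens"])  # gpt-4o 8
```

Because the wrapper returns the original response unchanged, calling code is unaffected — which is what makes zero-code-change instrumentation possible.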
Manual Instrumentation
If auto-instrumentation doesn't meet your needs:
```python
from risicare import trace

@trace(name="custom-llm-call", kind="llm_call")
def call_custom_llm(prompt: str) -> str:
    # Your custom LLM call
    response = ...
    return response
```

You can also set provider and model attributes manually:
```python
from risicare import get_tracer

tracer = get_tracer()

with tracer.start_span("custom-llm", kind="llm_call") as span:
    span.set_attribute("llm.provider", "custom")
    span.set_attribute("llm.model", "my-model")
    response = call_llm(prompt)
```

Next Steps
Select a provider to see detailed integration guides: