Providers
Auto-instrumentation for LLM providers.
Risicare automatically instruments popular LLM providers with zero code changes.
Supported Providers
Python SDK
OpenAI
GPT-4o, GPT-4, embeddings
Learn more
Anthropic
Claude 3.5, Claude 3
Learn more
Google
Gemini Pro, PaLM
Learn more
Cohere
Command, embeddings
Learn more
Mistral
Mistral Large, Medium
Learn more
Groq
Ultra-fast inference
Learn more
Together AI
Open-source models
Learn more
Ollama
Local inference
Learn more
Amazon Bedrock
AWS multi-model
Learn more
Vertex AI
Google Cloud AI
Learn more
Cerebras
Hardware-accelerated
Learn more
HuggingFace
Inference API
Learn more
OpenAI-Compatible
8+ providers via base_url
Learn more
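Because many providers expose OpenAI-compatible endpoints, the same instrumented OpenAI client can be pointed at them via `base_url`. A minimal sketch, assuming the official `openai` package; the endpoint URLs below are illustrative examples, not a verified or exhaustive list:

```python
# Hypothetical sketch: routing the OpenAI client to an OpenAI-compatible provider.
# These base URLs are illustrative; check each provider's own documentation.
OPENAI_COMPATIBLE_BASE_URLS = {
    "groq": "https://api.groq.com/openai/v1",
    "together": "https://api.together.xyz/v1",
    "ollama": "http://localhost:11434/v1",
}

def client_kwargs(provider: str, api_key: str) -> dict:
    """Build keyword arguments for OpenAI(...) targeting a compatible provider."""
    return {
        "base_url": OPENAI_COMPATIBLE_BASE_URLS[provider],
        "api_key": api_key,
    }

# Usage (assumes `openai` is installed and risicare.init() was called first):
# from openai import OpenAI
# client = OpenAI(**client_kwargs("groq", "sk-..."))
```

Calls made through such a client are traced the same way as calls to OpenAI itself, since the patch lives on the client library rather than the endpoint.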
JavaScript SDK
OpenAI (JS)
Node.js OpenAI client
Learn more
Anthropic (JS)
Node.js Anthropic client
Learn more
Vercel AI (JS)
Vercel AI SDK
Learn more
How It Works
When you call risicare.init(), the SDK automatically patches supported provider libraries:
```python
import risicare
from openai import OpenAI

risicare.init()

client = OpenAI()

# This call is automatically traced
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Captured Data
For each LLM call, Risicare captures:
| Field | Description |
|---|---|
| `model` | Model name (`gpt-4o`, `claude-3-sonnet`, etc.) |
| `provider` | Provider name (`openai`, `anthropic`, etc.) |
| `prompt_tokens` | Input token count |
| `completion_tokens` | Output token count |
| `total_tokens` | Total token count |
| `cost_usd` | Estimated cost in USD |
| `latency_ms` | Request duration in milliseconds |
| `temperature` | Sampling temperature |
| `max_tokens` | Maximum output tokens |
| `prompt` | Input prompt (if `trace_content=True`) |
| `completion` | Output completion (if `trace_content=True`) |
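Conceptually, auto-instrumentation records these fields by wrapping the provider's call method and reading the response's usage data. A simplified, hypothetical sketch of that wrapping (not the actual Risicare internals):

```python
import functools
import time

def traced(capture: list, fn):
    """Wrap an LLM call so each invocation appends a span-like record to `capture`."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        response = fn(*args, **kwargs)
        # Record a subset of the fields from the table above.
        capture.append({
            "model": kwargs.get("model"),
            "prompt_tokens": response["usage"]["prompt_tokens"],
            "completion_tokens": response["usage"]["completion_tokens"],
            "total_tokens": response["usage"]["total_tokens"],
            "latency_ms": (time.perf_counter() - start) * 1000,
            "temperature": kwargs.get("temperature"),
        })
        return response
    return wrapper

# Demo with a stubbed provider call standing in for a real client method:
def fake_create(**kwargs):
    return {"usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12}}

spans = []
create = traced(spans, fake_create)
create(model="gpt-4o", temperature=0.2)
```

In the real SDK this patching happens once, inside `risicare.init()`, so application code never wraps anything by hand.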
Manual Instrumentation
If auto-instrumentation doesn't meet your needs:
```python
from risicare import trace

@trace(name="custom-llm-call", kind="llm_call")
def call_custom_llm(prompt: str) -> str:
    # Your custom LLM call here
    response = ...
    return response
```

You can also set provider and model attributes manually:
```python
from risicare import get_tracer

tracer = get_tracer()

with tracer.start_span("custom-llm", kind="llm_call") as span:
    span.set_attribute("llm.provider", "custom")
    span.set_attribute("llm.model", "my-model")
    response = call_llm(prompt)
```

Next Steps
Select a provider to see detailed integration guides: