Quickstart
Start tracing your AI agents in minutes with zero code changes.
This guide covers the fastest path to observability: getting Risicare running in your project in under five minutes.
Prerequisites
Python:
- Python 3.10 or higher
- An AI agent using OpenAI, Anthropic, or another supported provider
JavaScript/TypeScript:
- Node.js 18.0.0 or higher
- An AI agent using any of the 12 supported providers (OpenAI, Anthropic, Google, Mistral, Groq, Cohere, Together, Ollama, HuggingFace, Cerebras, Bedrock, Vercel AI)
Get an API key (takes 1 minute):
- Sign up at app.risicare.ai
- A default project and API key are created automatically
- Go to Settings → API Keys to copy your key (starts with rsk-)
Your API key will look like: rsk-a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4
Multiple projects?
Need separate projects for different agents or teams? Click the project dropdown in the top nav → "+ New Project". Each project gets its own API key automatically. Use different keys to keep data isolated.
Installation
Tier 0: Zero Code Changes
The fastest way to start is with environment variables only. No code changes required.
```bash
export RISICARE_API_KEY=rsk-your-api-key
export RISICARE_TRACING=true
python your_agent.py
```
That's it! All LLM calls are now automatically traced.
How it works
The Python SDK uses import hooks to automatically instrument any OpenAI, Anthropic, or supported LLM library you import. No risicare.init() call needed—just install the SDK and set the environment variables. Traces start flowing when your agent imports a provider library.
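The mechanism can be sketched with a minimal, self-contained example (illustrative only, not Risicare's actual code): a finder on sys.meta_path intercepts an import and wraps a function while the module loads. Here we instrument json.dumps instead of an LLM client so the sketch runs without any provider library:

```python
import importlib.abc
import importlib.util
import sys

TRACED_CALLS = []  # stand-in for recorded spans

class TracingLoader(importlib.abc.Loader):
    """Wraps the real loader and patches a function after module exec."""
    def __init__(self, inner):
        self.inner = inner

    def create_module(self, spec):
        return self.inner.create_module(spec)

    def exec_module(self, module):
        self.inner.exec_module(module)
        original = module.dumps

        def traced_dumps(*args, **kwargs):
            TRACED_CALLS.append("json.dumps")  # record a "span"
            return original(*args, **kwargs)

        module.dumps = traced_dumps

class TracingFinder(importlib.abc.MetaPathFinder):
    """Intercepts `import json` and swaps in the wrapping loader."""
    def find_spec(self, fullname, path, target=None):
        if fullname != "json":
            return None
        sys.meta_path.remove(self)  # avoid recursing into ourselves
        try:
            spec = importlib.util.find_spec(fullname)
        finally:
            sys.meta_path.insert(0, self)
        if spec and spec.loader:
            spec.loader = TracingLoader(spec.loader)
        return spec

sys.meta_path.insert(0, TracingFinder())
sys.modules.pop("json", None)  # force a fresh import for the demo

import json  # instrumented during this import; no init() call involved

json.dumps({"hello": "world"})
print(TRACED_CALLS)  # ['json.dumps']
```

The real SDK applies the same idea to provider libraries such as openai, which is why traces start flowing as soon as your agent imports one.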
JavaScript requires explicit initialization
Zero-code Tier 0 is Python-only. The JavaScript SDK requires init() + patchOpenAI() — see Tier 1 below.
What You'll See
After running your agent, open the Risicare Dashboard. Traces typically appear within 10-30 seconds of the first LLM call.
Each trace shows:
- Complete execution flow — Every LLM call, tool use, and decision point
- Token usage & cost — Automatic conversion to USD per model (OpenAI, Anthropic, etc.)
- Prompts & completions — Full request/response content for debugging
- Timing breakdown — Latency per span, total trace duration
- Error detection — Failed calls are automatically flagged and classified

Minimum data needed
You need at least one LLM call in your agent for a trace to appear. If you don't see traces after 30 seconds, check that:
- Your API key is set correctly: `echo $RISICARE_API_KEY`
- Your agent actually calls an LLM (OpenAI, Anthropic, etc.)
- The SDK is installed: `pip show risicare`
Tier 1: Explicit Configuration
For more control, and to group related LLM calls into a single trace, call risicare.init() explicitly and mark your functions with @trace.
Without @trace, each LLM call is a separate trace
If you skip @trace, every LLM call appears as an independent trace in the dashboard.
Use @trace (decorator) or with trace("name"): (context manager) to group related calls.
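The grouping behavior can be sketched with a toy stand-in (not the Risicare SDK): a context variable holds the active trace ID, so spans recorded inside a trace share that ID, while spans recorded outside each mint their own, i.e. one trace per call:

```python
import contextvars
import uuid

# Minimal stand-in (NOT the Risicare SDK) showing why trace() groups calls.
_current = contextvars.ContextVar("trace_id", default=None)
spans = []  # (trace_id, span_name)

def record_span(name):
    # An active trace contributes its ID; otherwise each span gets a fresh one.
    spans.append((_current.get() or uuid.uuid4().hex, name))

class trace:
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self._token = _current.set(uuid.uuid4().hex)
        return self

    def __exit__(self, *exc):
        _current.reset(self._token)

# Without a trace: two calls, two distinct trace IDs.
record_span("llm_call_1")
record_span("llm_call_2")

# With a trace: both calls share one ID.
with trace("answer_question"):
    record_span("llm_call_3")
    record_span("llm_call_4")

print(spans[0][0] != spans[1][0])  # True
print(spans[2][0] == spans[3][0])  # True
```

The real @trace decorator works the same way conceptually: it opens a span for the function body, and every LLM call made inside it attaches to that span.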
Error Handling
Errors inside traced functions are automatically captured with full exception details:
```python
import risicare
from risicare import trace

risicare.init(api_key="rsk-your-api-key")

from openai import OpenAI

client = OpenAI()

@trace
def risky_operation(query: str):
    try:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": query}],
        )
        return response.choices[0].message.content
    except Exception:
        # Re-raise so Risicare captures the error in the trace
        raise
    finally:
        risicare.shutdown()  # Flush spans even on failure
```
Reporting caught exceptions
Unhandled exceptions are captured automatically. For exceptions you catch without re-raising, use report_error() to feed them to the self-healing pipeline:
```python
from risicare import report_error

try:
    result = tool.execute()
except ToolError as e:
    report_error(e)  # Triggers diagnosis → fix generation
    result = fallback()  # Handle gracefully
```
See Diagnosis Overview for details.
Tier 2: Agent Identity
Add agent identity to your code so each trace is attributed to a named agent, letting the dashboard group and filter traces per agent.
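This guide does not specify the Tier 2 API, so treat the snippet below as a hypothetical shape only: the agent_id and agent_name parameters are illustrative, not confirmed, and the SDK Configuration reference has the real options.

```python
import risicare

# Hypothetical parameters: agent_id / agent_name are illustrative only.
# Consult the SDK Configuration reference for the actual Tier 2 options.
risicare.init(
    api_key="rsk-your-api-key",
    agent_id="support-bot",               # stable identifier for this agent
    agent_name="Customer Support Agent",  # display name in the dashboard
)
```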
View Your Traces
Open the Risicare Dashboard to see:
- Traces: Complete execution flows with timing
- Spans: Individual LLM calls with prompts/completions
- Costs: Token usage and cost breakdown by model
- Errors: Automatic error detection and classification
Next Steps
Progressive Integration
Learn about Tiers 3-5 for sessions, phases, and multi-agent support
SDK Configuration
Full configuration options and environment variables
Error Taxonomy
Understand how errors are classified automatically
Self-Healing
Enable automatic fix generation and deployment