Quickstart

Start tracing your AI agents in 5 minutes with zero code changes.

This guide covers the fastest path to observability: getting Risicare running in your project with nothing beyond an API key.

Prerequisites

Python:

  • Python 3.10 or higher
  • An AI agent using OpenAI, Anthropic, or another supported provider

JavaScript/TypeScript:

  • Node.js 18.0.0 or higher
  • An AI agent using OpenAI, Anthropic, or Vercel AI SDK

Get an API key (takes 1 minute):

  1. Sign up at app.risicare.ai
  2. A default project and API key are created automatically
  3. Go to Settings → API Keys to copy your key (starts with rsk-)

Your API key will look like: rsk-a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4

Multiple projects?

Need separate projects for different agents or teams? Click the project dropdown in the top nav → "+ New Project". Each project gets its own API key automatically. Use different keys to keep data isolated.

Installation

Tier 0: Zero Code Changes

The fastest way to start is with environment variables only. No code changes required.
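Assuming the package name shown later in this guide (`pip show risicare`) and the `RISICARE_API_KEY` variable from the troubleshooting checklist below, a Tier 0 setup could look like this; the agent filename is a placeholder:

```shell
# Install the SDK
pip install risicare

# Point it at your project; the SDK auto-initializes from the environment
export RISICARE_API_KEY="rsk-a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4"

# Run your agent exactly as before, with no code changes
python my_agent.py
```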

That's it! All LLM calls are now automatically traced.

How it works

The SDK uses Python import hooks (or ES Proxies for JavaScript) to automatically instrument any OpenAI, Anthropic, or supported LLM library you import. No risicare.init() call needed—just install the SDK and set the environment variables. Traces start flowing when your agent imports a provider library.
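To illustrate the import-hook technique (this is a standard-library sketch of the general mechanism, not the SDK's actual code), a finder placed at the front of `sys.meta_path` sees every import attempt, so it can patch a provider client the moment its module loads:

```python
import importlib.abc
import sys

# Illustrative only: the module names a real SDK instruments may differ.
INSTRUMENTED = {"openai", "anthropic"}
observed = []

class AutoInstrumentFinder(importlib.abc.MetaPathFinder):
    """Watches imports and records when a supported provider library loads."""

    def find_spec(self, fullname, path, target=None):
        if fullname in INSTRUMENTED:
            observed.append(fullname)  # a real SDK would wrap the client here
        return None  # defer to the normal import machinery

# Front of the list, so this finder runs before the default importers.
sys.meta_path.insert(0, AutoInstrumentFinder())
```

Because the hook runs inside the import machinery itself, no `init()` call is needed: the first `import openai` in your agent is enough to trigger instrumentation.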

What You'll See

After running your agent, open the Risicare Dashboard. Traces typically appear within 10-30 seconds of the first LLM call.

Each trace shows:

  • Complete execution flow — Every LLM call, tool use, and decision point
  • Token usage & cost — Automatic conversion to USD per model (OpenAI, Anthropic, etc.)
  • Prompts & completions — Full request/response content for debugging
  • Timing breakdown — Latency per span, total trace duration
  • Error detection — Failed calls are automatically flagged and classified

Minimum data needed

You need at least one LLM call in your agent for a trace to appear. If you don't see traces after 30 seconds, check that:

  • Your API key is set correctly: echo $RISICARE_API_KEY
  • Your agent actually calls an LLM (OpenAI, Anthropic, etc.)
  • The SDK is installed: pip show risicare

Tier 1: Explicit Configuration

Tier 0 uses environment variables only—the SDK auto-initializes. For more control (custom service name, sampling rate, environment labels), use risicare.init():
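A sketch of what explicit configuration might look like: `risicare.init()` comes from the text above, but the keyword names below are assumptions mapped onto the options just mentioned, so check the SDK reference for the real signature:

```python
import risicare

# Hypothetical parameter names, matching the options described above.
risicare.init(
    service_name="checkout-agent",  # custom service name in the dashboard
    sample_rate=0.25,               # trace a fraction of requests
    environment="staging",          # environment label for filtering
)
```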

Tier 2: Agent Identity

Add agent identity to your code:
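The identity API itself isn't shown in this guide, so the decorator below is only a hypothetical shape for how agent identity might be attached; treat every name in it as an assumption and consult the SDK reference:

```python
import risicare

# Hypothetical API shape: tag all spans produced inside this function with a
# stable agent name and version, so the dashboard can group runs per agent.
@risicare.agent(name="order-triage", version="1.2.0")
def handle_ticket(user_message: str) -> str:
    # ... your existing LLM calls run unchanged inside the tagged scope ...
    ...
```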

View Your Traces

Open the Risicare Dashboard to see:

  • Traces: Complete execution flows with timing
  • Spans: Individual LLM calls with prompts/completions
  • Costs: Token usage and cost breakdown by model
  • Errors: Automatic error detection and classification

Next Steps