Anthropic

Auto-instrumentation for the Anthropic API.

Risicare automatically instruments the Anthropic Python SDK.

Installation

pip install risicare anthropic

Basic Usage

import risicare
from anthropic import Anthropic
 
risicare.init()
 
client = Anthropic()
 
# Automatically traced
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

Supported Methods

| Method | Traced |
| --- | --- |
| messages.create | Yes (sync + async) |


Streaming

Streaming responses are fully supported via the stream=True parameter:

stream = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)
 
for event in stream:
    if event.type == "content_block_delta":
        print(event.delta.text, end="")

Note: The client.messages.stream() context manager is not instrumented. Always use messages.create(stream=True) to ensure traces are captured.
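The accumulation pattern in the loop above can be sketched without a live API call. The dataclasses below are hypothetical stand-ins that mimic the shape of content_block_delta events; a real stream yields SDK event objects:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-ins for SDK event objects (illustration only).
@dataclass
class TextDelta:
    text: str

@dataclass
class StreamEvent:
    type: str
    delta: Optional[TextDelta] = None

def collect_text(events):
    """Accumulate text from content_block_delta events, as in the loop above."""
    parts = []
    for event in events:
        if event.type == "content_block_delta" and event.delta is not None:
            parts.append(event.delta.text)
    return "".join(parts)

events = [
    StreamEvent("message_start"),
    StreamEvent("content_block_delta", TextDelta("Hello, ")),
    StreamEvent("content_block_delta", TextDelta("world!")),
    StreamEvent("message_stop"),
]
print(collect_text(events))  # Hello, world!
```

An instrumentation layer records the fully assembled text once the stream is exhausted, which is why the trace for a streamed call is only complete after the consuming loop finishes.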

Async Support

Async clients are automatically instrumented:

import asyncio
from anthropic import AsyncAnthropic
 
client = AsyncAnthropic()
 
async def main():
    response = await client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.content[0].text)
 
asyncio.run(main())

Tool Use

Tool use is captured with full context:

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "name": "get_weather",
        "description": "Get current weather",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            }
        }
    }]
)

Risicare captures:

  • Tool definitions
  • Tool use blocks
  • Tool inputs and results
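The capture step can be illustrated with plain dictionaries shaped like the API's content blocks. This is a hypothetical sketch of the filtering involved, not Risicare's actual internals; a real response holds SDK block objects rather than dicts:

```python
# Hypothetical response content, shaped like the API's content blocks.
content = [
    {"type": "text", "text": "Let me check the weather."},
    {"type": "tool_use", "id": "toolu_01", "name": "get_weather",
     "input": {"location": "Paris"}},
]

def tool_uses(blocks):
    """Filter content blocks down to tool_use blocks -- the kind of
    data an instrumentation layer records (name, id, and input)."""
    return [b for b in blocks if b["type"] == "tool_use"]

calls = tool_uses(content)
print(len(calls), calls[0]["name"])  # 1 get_weather
```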

Captured Attributes

| Attribute | Description |
| --- | --- |
| gen_ai.system | Always "anthropic" |
| gen_ai.request.model | Requested model name |
| gen_ai.response.model | Model name returned by the API |
| gen_ai.response.id | Response ID |
| gen_ai.request.max_tokens | Max output tokens |
| gen_ai.request.temperature | Sampling temperature |
| gen_ai.request.stream | Whether streaming was requested |
| gen_ai.request.has_tools | Whether tools were provided |
| gen_ai.usage.prompt_tokens | Input tokens |
| gen_ai.usage.completion_tokens | Output tokens |
| gen_ai.usage.total_tokens | Total tokens |
| gen_ai.response.stop_reason | Stop reason |
| gen_ai.completion.tool_uses | Number of tool use blocks |
| gen_ai.latency_ms | Request latency in milliseconds |
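To make the mapping concrete, here is a hypothetical sketch of how request and response fields line up with these attributes. The helper name and dict shapes are illustrative assumptions, not Risicare's actual internals:

```python
def build_attributes(request, response, latency_ms):
    """Illustrative mapping from request/response fields to span
    attributes (hypothetical helper, not Risicare's real code)."""
    usage = response["usage"]
    return {
        "gen_ai.system": "anthropic",
        "gen_ai.request.model": request["model"],
        "gen_ai.request.max_tokens": request["max_tokens"],
        "gen_ai.request.stream": request.get("stream", False),
        "gen_ai.request.has_tools": "tools" in request,
        "gen_ai.response.model": response["model"],
        "gen_ai.usage.prompt_tokens": usage["input_tokens"],
        "gen_ai.usage.completion_tokens": usage["output_tokens"],
        "gen_ai.usage.total_tokens": usage["input_tokens"] + usage["output_tokens"],
        "gen_ai.latency_ms": latency_ms,
    }

attrs = build_attributes(
    {"model": "claude-3-5-sonnet-20241022", "max_tokens": 1024},
    {"model": "claude-3-5-sonnet-20241022",
     "usage": {"input_tokens": 12, "output_tokens": 34}},
    latency_ms=250,
)
print(attrs["gen_ai.usage.total_tokens"])  # 46
```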

Cost Tracking

Costs are calculated automatically from the token usage reported by the API:

| Model | Input (per 1M tokens) | Output (per 1M tokens) |
| --- | --- | --- |
| claude-opus-4-5 | $15.00 | $75.00 |
| claude-sonnet-4-5 | $3.00 | $15.00 |
| claude-haiku-4-5 | $0.80 | $4.00 |
| claude-3-5-sonnet | $3.00 | $15.00 |
| claude-3-5-haiku | $0.80 | $4.00 |
| claude-3-opus | $15.00 | $75.00 |
| claude-3-haiku | $0.25 | $1.25 |
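The arithmetic is per-million-token pricing applied to each side of the exchange. A minimal sketch using two rows from the table above (Risicare performs this calculation for you; the helper below is purely illustrative):

```python
# Per-million-token prices (USD) taken from the table above.
PRICES = {
    "claude-3-5-sonnet": (3.00, 15.00),
    "claude-3-5-haiku": (0.80, 4.00),
}

def estimate_cost(model, input_tokens, output_tokens):
    """cost = input_tokens/1M * input_price + output_tokens/1M * output_price"""
    input_price, output_price = PRICES[model]
    return (input_tokens / 1_000_000 * input_price
            + output_tokens / 1_000_000 * output_price)

cost = estimate_cost("claude-3-5-sonnet", 1_000, 500)
print(f"${cost:.6f}")  # $0.010500
```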

System Prompts

System prompts are captured separately:

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system="You are a helpful assistant.",
    messages=[{"role": "user", "content": "Hello!"}]
)

Next Steps