# OpenAI

Auto-instrumentation for the OpenAI API.

Risicare automatically instruments the OpenAI Python SDK.
## Installation

```bash
pip install risicare openai
```

## Basic Usage
```python
import risicare
from openai import OpenAI

risicare.init()

client = OpenAI()

# Automatically traced
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
```

## Supported Methods
| Method | Traced |
|---|---|
| `chat.completions.create` | Yes (sync + async) |
| `embeddings.create` | Yes (sync + async) |
## Streaming
Streaming responses are fully supported:
```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```

The span completes when the stream is fully consumed.
## Async Support
Async clients are automatically instrumented:
```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()

async def main():
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )

asyncio.run(main())
```

## Function Calling
Tool/function calls are captured with full context:
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {"type": "object", "properties": {}},
        },
    }],
)
```

Risicare captures:
- Tool definitions
- Tool calls made by the model
- Tool call arguments
## Captured Attributes
| Attribute | Description |
|---|---|
| `gen_ai.system` | `openai` (or the detected provider for compatible APIs) |
| `gen_ai.request.model` | Requested model name |
| `gen_ai.response.model` | Model name returned by the API |
| `gen_ai.response.id` | Response ID |
| `gen_ai.request.temperature` | Sampling temperature |
| `gen_ai.request.max_tokens` | Max output tokens |
| `gen_ai.request.stream` | Whether streaming was requested |
| `gen_ai.request.has_tools` | Whether tools were provided |
| `gen_ai.usage.prompt_tokens` | Input tokens |
| `gen_ai.usage.completion_tokens` | Output tokens |
| `gen_ai.usage.total_tokens` | Total tokens |
| `gen_ai.completion.tool_calls` | Number of tool calls made |
| `gen_ai.completion.finish_reason` | Stop reason |
| `gen_ai.latency_ms` | Request latency in milliseconds |
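To make the table concrete, here is an illustrative set of attributes as one chat-completion span might carry them. The values are made up for the example; only the attribute names come from the table above.

```python
# Hypothetical attributes for a single non-streaming chat completion span.
span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.response.model": "gpt-4o-2024-08-06",
    "gen_ai.request.stream": False,
    "gen_ai.usage.prompt_tokens": 24,
    "gen_ai.usage.completion_tokens": 8,
    "gen_ai.usage.total_tokens": 32,
    "gen_ai.completion.finish_reason": "stop",
}

# Total tokens is the sum of prompt and completion tokens.
assert span_attributes["gen_ai.usage.total_tokens"] == (
    span_attributes["gen_ai.usage.prompt_tokens"]
    + span_attributes["gen_ai.usage.completion_tokens"]
)
```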
For embedding calls (`embeddings.create`), Risicare also captures:
| Attribute | Description |
|---|---|
| `gen_ai.operation` | `embeddings` |
| `gen_ai.input.count` | Number of input texts |
| `gen_ai.response.embeddings` | Number of embeddings returned |
| `gen_ai.response.dimensions` | Embedding dimensions |
## Cost Tracking
Costs are automatically calculated:
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| gpt-4o | $2.50 | $10.00 |
| gpt-4o-mini | $0.15 | $0.60 |
| gpt-4-turbo | $10.00 | $30.00 |
| gpt-3.5-turbo | $0.50 | $1.50 |
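The calculation is just tokens divided by one million, times the per-million price, summed over both directions. A hypothetical helper (not part of Risicare's public API) sketching this from the table above:

```python
# (input, output) USD per 1M tokens, taken from the pricing table above.
PRICES_PER_1M = {
    "gpt-4o": (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
    "gpt-4-turbo": (10.00, 30.00),
    "gpt-3.5-turbo": (0.50, 1.50),
}


def request_cost(model, prompt_tokens, completion_tokens):
    """Cost in USD for one request: tokens / 1M * per-million price, per direction."""
    input_price, output_price = PRICES_PER_1M[model]
    return (
        prompt_tokens / 1_000_000 * input_price
        + completion_tokens / 1_000_000 * output_price
    )


# e.g. a gpt-4o call with 1,000 prompt tokens and 500 completion tokens:
cost = request_cost("gpt-4o", 1000, 500)  # 0.0025 + 0.005 = 0.0075 USD
```

In practice the token counts come straight from `gen_ai.usage.prompt_tokens` and `gen_ai.usage.completion_tokens`.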
## Disable Content Capture

To disable prompt/completion capture:

```python
risicare.init(trace_content=False)
```