# OpenAI (JS)

Auto-instrument the OpenAI Node.js/TypeScript SDK with a single function call.

## Installation

```bash
npm install risicare openai
```

## Quick Start
```typescript
import { init } from 'risicare';
import { patchOpenAI } from 'risicare/openai';
import OpenAI from 'openai';

// Initialize Risicare
init();

// Wrap the OpenAI client
const openai = patchOpenAI(new OpenAI());

// All calls are now traced
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

## How It Works
`patchOpenAI()` returns an ES `Proxy` that wraps all OpenAI methods:

```typescript
import { patchOpenAI } from 'risicare/openai';
import OpenAI from 'openai';

// The original client is wrapped, not modified
const original = new OpenAI();
const traced = patchOpenAI(original);

// Both work, but only `traced` creates spans
await original.chat.completions.create(...); // Not traced
await traced.chat.completions.create(...);   // Traced
```

## Captured Attributes
| Attribute | Description |
|---|---|
| `gen_ai.system` | `openai` (or the detected provider for compatible APIs) |
| `gen_ai.request.model` | Requested model name |
| `gen_ai.response.model` | Model name returned by the API |
| `gen_ai.response.id` | Response ID |
| `gen_ai.request.temperature` | Sampling temperature |
| `gen_ai.request.max_tokens` | Max output tokens |
| `gen_ai.request.stream` | Whether streaming was requested |
| `gen_ai.request.has_tools` | Whether tools were provided |
| `gen_ai.usage.prompt_tokens` | Input tokens |
| `gen_ai.usage.completion_tokens` | Output tokens |
| `gen_ai.usage.total_tokens` | Total tokens |
| `gen_ai.completion.tool_calls` | Number of tool calls made |
| `gen_ai.completion.finish_reason` | Stop reason |
| `gen_ai.latency_ms` | Request latency in milliseconds |
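For intuition, the proxy-based wrapping described above can be sketched with a plain ES `Proxy`. This is an illustrative toy, not Risicare's actual implementation: the `wrapWithSpans` helper and its `Span` record are invented for this example, and it only captures the method name and latency.

```typescript
// Illustrative sketch only: `wrapWithSpans` and `Span` are made up for this
// example and are not part of the risicare API.
type Span = { name: string; latencyMs: number };

function wrapWithSpans<T extends object>(client: T, spans: Span[]): T {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      // Recursively wrap nested namespaces like `chat.completions`
      if (typeof value === 'object' && value !== null) {
        return wrapWithSpans(value as object, spans) as unknown;
      }
      if (typeof value !== 'function') return value;
      // Wrap methods so each call records a span with its latency,
      // even when the underlying call throws
      return async (...args: unknown[]) => {
        const start = Date.now();
        try {
          return await value.apply(target, args);
        } finally {
          spans.push({ name: String(prop), latencyMs: Date.now() - start });
        }
      };
    },
  });
}
```

Because a `Proxy` intercepts property access rather than mutating the target, the original client object stays untouched; only calls made through the wrapped reference are recorded, matching the `original` vs `traced` behavior shown earlier.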
## Streaming Support

Streaming is fully supported:

```typescript
const stream = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a story' }],
  stream: true,
});

// Span is created and tokens are accumulated automatically
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Span completes with total token counts
```

## OpenAI-Compatible Providers
The proxy automatically detects OpenAI-compatible endpoints:

```typescript
import { patchOpenAI } from 'risicare/openai';
import OpenAI from 'openai';

// Works with any OpenAI-compatible provider
const together = patchOpenAI(
  new OpenAI({
    baseURL: 'https://api.together.xyz/v1',
    apiKey: process.env.TOGETHER_API_KEY,
  })
);

// Spans will show the provider as "together" (detected from baseURL)
await together.chat.completions.create({
  model: 'meta-llama/Llama-3-70b-chat-hf',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

### Supported Compatible Providers
| Provider | Base URL | Auto-detected |
|---|---|---|
| Together AI | api.together.xyz | Yes |
| Groq | api.groq.com | Yes |
| DeepSeek | api.deepseek.com | Yes |
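Detection of this kind can be sketched as a hostname lookup against the table above. The `detectProvider` helper below is hypothetical, written for illustration only; it is not risicare's actual detection logic.

```typescript
// Hypothetical sketch of baseURL-based provider detection. The hostname map
// mirrors the supported providers listed above; not risicare's real code.
const PROVIDER_HOSTS: Record<string, string> = {
  'api.together.xyz': 'together',
  'api.groq.com': 'groq',
  'api.deepseek.com': 'deepseek',
};

function detectProvider(baseURL: string | undefined): string {
  if (!baseURL) return 'openai'; // default SDK endpoint
  try {
    const host = new URL(baseURL).hostname;
    return PROVIDER_HOSTS[host] ?? 'openai';
  } catch {
    return 'openai'; // unparseable baseURL: fall back to the default
  }
}
```

Keying on the parsed hostname rather than substring-matching the raw URL avoids false positives from path segments or query strings.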
## Tool Calls

Tool/function calls are automatically traced:

```typescript
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        parameters: { type: 'object', properties: { location: { type: 'string' } } },
      },
    },
  ],
});

// Tool calls appear as child spans with:
// - tool.name: "get_weather"
// - tool.arguments: {"location": "..."}
```

## Configuration
### Disable Content Capture

```typescript
init({
  traceContent: false, // Don't capture prompts/completions
});
```

### Custom Metadata

```typescript
init({
  metadata: {
    feature: 'chat',
    team: 'platform',
  },
});
```