OpenAI (JS)

Instrument OpenAI in Node.js/TypeScript.

Auto-instrument the OpenAI Node.js SDK with a single function call.

Installation

npm install risicare openai

Quick Start

import { init } from 'risicare';
import { patchOpenAI } from 'risicare/openai';
import OpenAI from 'openai';
 
// Initialize Risicare
init();
 
// Wrap the OpenAI client
const openai = patchOpenAI(new OpenAI());
 
// All calls are now traced
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

How It Works

patchOpenAI() returns an ES Proxy that wraps all OpenAI methods:

import { patchOpenAI } from 'risicare/openai';
 
// The original client is wrapped, not modified
const original = new OpenAI();
const traced = patchOpenAI(original);
 
// Both work, but only `traced` creates spans
await original.chat.completions.create(...); // Not traced
await traced.chat.completions.create(...);   // Traced
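
To make the Proxy mechanics concrete, here is a minimal sketch of the technique (not risicare's actual implementation; `record` is a hypothetical stand-in for span creation). Property reads recurse into nested namespaces like `chat.completions`, and function reads return a wrapper that records the call before delegating:

```typescript
type AnyFn = (...args: unknown[]) => unknown;

// Wrap an object in an ES Proxy so method calls are recorded.
// The original object is never modified.
function wrapWithTracing<T extends object>(
  target: T,
  record: (path: string) => void,
  path = ''
): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      const fullPath = path ? `${path}.${String(prop)}` : String(prop);
      if (typeof value === 'function') {
        // Record each invocation, then delegate to the original method
        return (...args: unknown[]) => {
          record(fullPath);
          return (value as AnyFn).apply(obj, args);
        };
      }
      if (value !== null && typeof value === 'object') {
        // Recurse into nested namespaces (e.g. chat.completions)
        return wrapWithTracing(value as object, record, fullPath);
      }
      return value;
    },
  });
}

// Usage with a fake client standing in for OpenAI:
const calls: string[] = [];
const fakeClient = {
  chat: { completions: { create: (req: { model: string }) => ({ model: req.model }) } },
};
const traced = wrapWithTracing(fakeClient, (p) => calls.push(p));
traced.chat.completions.create({ model: 'gpt-4o' });
// calls is now ['chat.completions.create']; fakeClient itself is untouched
```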

Captured Attributes

| Attribute | Description |
| --- | --- |
| `gen_ai.system` | `openai` (or detected provider for compatible APIs) |
| `gen_ai.request.model` | Requested model name |
| `gen_ai.response.model` | Model name returned by API |
| `gen_ai.response.id` | Response ID |
| `gen_ai.request.temperature` | Sampling temperature |
| `gen_ai.request.max_tokens` | Max output tokens |
| `gen_ai.request.stream` | Whether streaming was requested |
| `gen_ai.request.has_tools` | Whether tools were provided |
| `gen_ai.usage.prompt_tokens` | Input tokens |
| `gen_ai.usage.completion_tokens` | Output tokens |
| `gen_ai.usage.total_tokens` | Total tokens |
| `gen_ai.completion.tool_calls` | Number of tool calls made |
| `gen_ai.completion.finish_reason` | Stop reason |
| `gen_ai.latency_ms` | Request latency in milliseconds |
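
For illustration, the attributes on a single traced call might look like the object below. All values are invented for the example; only the keys follow the table above:

```typescript
// Example attribute set for one traced, non-streaming call.
// Every value here is made up for illustration purposes.
const exampleAttributes = {
  'gen_ai.system': 'openai',
  'gen_ai.request.model': 'gpt-4o',
  'gen_ai.response.model': 'gpt-4o-2024-08-06',
  'gen_ai.response.id': 'chatcmpl-abc123',
  'gen_ai.request.temperature': 0.7,
  'gen_ai.request.max_tokens': 256,
  'gen_ai.request.stream': false,
  'gen_ai.request.has_tools': false,
  'gen_ai.usage.prompt_tokens': 12,
  'gen_ai.usage.completion_tokens': 34,
  'gen_ai.usage.total_tokens': 46,
  'gen_ai.completion.tool_calls': 0,
  'gen_ai.completion.finish_reason': 'stop',
  'gen_ai.latency_ms': 820,
};
```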

Streaming Support

Streaming is fully supported:

const stream = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a story' }],
  stream: true,
});
 
// Span is created and tokens are accumulated automatically
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Span completes with total token counts
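
The accumulation step can be sketched like this (a simplified stand-in for what the wrapper does internally, using a chunk shape modeled on the OpenAI SDK's ChatCompletionChunk; note the OpenAI API only includes `usage` on the final chunk when `stream_options: { include_usage: true }` is set):

```typescript
interface StreamChunk {
  choices: { delta?: { content?: string }; finish_reason?: string | null }[];
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// Consume a stream, concatenating content deltas and keeping the
// last finish_reason and the usage totals (if the API reports them).
async function accumulateStream(stream: AsyncIterable<StreamChunk>) {
  let content = '';
  let finishReason: string | null | undefined;
  let totalTokens: number | undefined;
  for await (const chunk of stream) {
    content += chunk.choices[0]?.delta?.content ?? '';
    finishReason = chunk.choices[0]?.finish_reason ?? finishReason;
    if (chunk.usage) totalTokens = chunk.usage.total_tokens;
  }
  return { content, finishReason, totalTokens };
}

// A fake stream standing in for the real API response:
async function* fakeStream(): AsyncGenerator<StreamChunk> {
  yield { choices: [{ delta: { content: 'Hel' } }] };
  yield { choices: [{ delta: { content: 'lo' }, finish_reason: 'stop' }] };
  yield { choices: [], usage: { prompt_tokens: 9, completion_tokens: 2, total_tokens: 11 } };
}
```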

OpenAI-Compatible Providers

The proxy automatically detects OpenAI-compatible endpoints:

import { patchOpenAI } from 'risicare/openai';
import OpenAI from 'openai';
 
// Works with any OpenAI-compatible provider
const together = patchOpenAI(
  new OpenAI({
    baseURL: 'https://api.together.xyz/v1',
    apiKey: process.env.TOGETHER_API_KEY,
  })
);
 
// Spans will show provider as "together" (detected from baseURL)
await together.chat.completions.create({
  model: 'meta-llama/Llama-3-70b-chat-hf',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Supported Compatible Providers

| Provider | Base URL | Auto-detected |
| --- | --- | --- |
| Together AI | api.together.xyz | Yes |
| Groq | api.groq.com | Yes |
| DeepSeek | api.deepseek.com | Yes |
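
A hostname-matching sketch of this kind of detection, using the hostnames from the table above (risicare's real detection logic may differ):

```typescript
// Map a client's baseURL to a provider name by hostname.
// Unknown OpenAI-compatible endpoints fall back to 'openai'.
function detectProvider(baseURL: string): string {
  const host = new URL(baseURL).hostname;
  if (host.endsWith('api.together.xyz')) return 'together';
  if (host.endsWith('api.groq.com')) return 'groq';
  if (host.endsWith('api.deepseek.com')) return 'deepseek';
  return 'openai';
}

detectProvider('https://api.together.xyz/v1'); // → 'together'
detectProvider('https://api.openai.com/v1');   // → 'openai'
```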

Tool Calls

Tool/function calls are automatically traced:

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        parameters: { type: 'object', properties: { location: { type: 'string' } } },
      },
    },
  ],
});
 
// Tool calls appear as child spans with:
// - tool.name: "get_weather"
// - tool.arguments: {"location": "..."}
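
Extracting those attributes from a response can be sketched as follows (a hypothetical helper, not part of the risicare API; the response shape mirrors the OpenAI chat completion format):

```typescript
interface ToolCallLike {
  function: { name: string; arguments: string };
}
interface CompletionLike {
  choices: { message: { tool_calls?: ToolCallLike[] } }[];
}

// Turn each tool call on the first choice into a tool.name/tool.arguments pair.
function toolCallAttributes(response: CompletionLike): Record<string, string>[] {
  const toolCalls = response.choices[0]?.message.tool_calls ?? [];
  return toolCalls.map((call) => ({
    'tool.name': call.function.name,
    'tool.arguments': call.function.arguments,
  }));
}

// With a fake response standing in for the API result:
const attrs = toolCallAttributes({
  choices: [{
    message: {
      tool_calls: [{ function: { name: 'get_weather', arguments: '{"location":"Paris"}' } }],
    },
  }],
});
// attrs[0]['tool.name'] === 'get_weather'
```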

Configuration

Disable Content Capture

init({
  traceContent: false, // Don't capture prompts/completions
});

Custom Metadata

init({
  metadata: {
    feature: 'chat',
    team: 'platform',
  },
});

Next Steps