# Vercel AI (JS)

Auto-instrument the Vercel AI SDK for unified tracing across providers.
## Installation

```bash
npm install risicare ai @ai-sdk/openai
```

## Quick Start
```typescript
import { init } from 'risicare';
import { patchVercelAI } from 'risicare/vercel-ai';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Initialize Risicare
init();

// Get traced wrapper functions
const { tracedGenerateText, tracedStreamText, tracedGenerateObject } = patchVercelAI();

// Wrap the Vercel AI functions
const wrappedGenerateText = tracedGenerateText(generateText);

// Use the wrapped function (traced automatically)
const { text } = await wrappedGenerateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
});
```

## How It Works
`patchVercelAI()` returns higher-order functions that wrap Vercel AI SDK functions. Each wrapper takes the original function and returns a traced version:
```typescript
import { patchVercelAI } from 'risicare/vercel-ai';
import { generateText, streamText, generateObject } from 'ai';

const { tracedGenerateText, tracedStreamText, tracedGenerateObject } = patchVercelAI();

// Wrap each function you want to trace
const generate = tracedGenerateText(generateText);
const stream = tracedStreamText(streamText);
const genObject = tracedGenerateObject(generateObject);
```

## Available Wrappers
| Wrapper | Wraps | Description |
|---|---|---|
| `tracedGenerateText` | `generateText` | Single text generation |
| `tracedStreamText` | `streamText` | Streaming text generation |
| `tracedGenerateObject` | `generateObject` | Structured output generation |
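Conceptually, every wrapper follows the same higher-order pattern: open a span before calling the original function, and close it when the call settles. A minimal, dependency-free sketch of that pattern (the `traced` helper and `Span` type here are illustrative, not the risicare internals, which record real OpenTelemetry spans):

```typescript
// Illustrative sketch of the higher-order tracing pattern.
// Not the actual risicare implementation.
type AsyncFn<A extends unknown[], R> = (...args: A) => Promise<R>;

interface Span {
  name: string;
  startedAt: number;
  endedAt?: number;
  error?: unknown;
}

const spans: Span[] = [];

function traced<A extends unknown[], R>(name: string, fn: AsyncFn<A, R>): AsyncFn<A, R> {
  return async (...args: A): Promise<R> => {
    const span: Span = { name, startedAt: Date.now() };
    spans.push(span);
    try {
      return await fn(...args);
    } catch (err) {
      span.error = err; // failures are recorded on the span
      throw err;
    } finally {
      span.endedAt = Date.now(); // the span always closes, success or error
    }
  };
}
```

In this picture, `tracedGenerateText(generateText)` plays the role of `traced('generateText', generateText)`: the wrapped function behaves identically to the original, with tracing layered around it.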
## Captured Attributes

| Attribute | Description |
|---|---|
| `gen_ai.system` | Provider (`openai`, `anthropic`, etc.) |
| `gen_ai.request.model` | Model identifier |
| `gen_ai.usage.prompt_tokens` | Input tokens |
| `gen_ai.usage.completion_tokens` | Output tokens |
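To make the mapping concrete, here is how the table's attributes could be assembled into a span attribute record from a provider name, model, and usage object. The `buildAttributes` helper and `Usage` shape are hypothetical; only the `gen_ai.*` keys come from the table above:

```typescript
// Hypothetical helper: maps a result's metadata onto the gen_ai.*
// attribute keys listed in the table above.
interface Usage {
  promptTokens: number;
  completionTokens: number;
}

function buildAttributes(provider: string, model: string, usage: Usage) {
  return {
    'gen_ai.system': provider,
    'gen_ai.request.model': model,
    'gen_ai.usage.prompt_tokens': usage.promptTokens,
    'gen_ai.usage.completion_tokens': usage.completionTokens,
  };
}
```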
## Streaming

Streaming is fully supported with token accumulation:
```typescript
import { patchVercelAI } from 'risicare/vercel-ai';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { tracedStreamText } = patchVercelAI();
const wrappedStreamText = tracedStreamText(streamText);

const result = await wrappedStreamText({
  model: openai('gpt-4o'),
  prompt: 'Write a story',
});

// Span tracks the stream
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Final usage is captured when stream completes
const usage = await result.usage;
```

## Structured Output
Object generation is traced with schema information:
```typescript
import { patchVercelAI } from 'risicare/vercel-ai';
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { tracedGenerateObject } = patchVercelAI();
const wrappedGenerateObject = tracedGenerateObject(generateObject);

const { object } = await wrappedGenerateObject({
  model: openai('gpt-4o'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
  }),
  prompt: 'Generate a person',
});
```

## Multi-Provider Support
The Vercel AI SDK supports multiple providers; calls to each are traced with the matching `gen_ai.system` value:
```typescript
import { patchVercelAI } from 'risicare/vercel-ai';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

const { tracedGenerateText } = patchVercelAI();
const generate = tracedGenerateText(generateText);

// OpenAI - traced with gen_ai.system = "openai"
await generate({ model: openai('gpt-4o'), prompt: '...' });

// Anthropic - traced with gen_ai.system = "anthropic"
await generate({ model: anthropic('claude-sonnet-4-20250514'), prompt: '...' });

// Google - traced with gen_ai.system = "google"
await generate({ model: google('gemini-pro'), prompt: '...' });
```

## Tool Calls
Tool execution is automatically traced:
```typescript
import { patchVercelAI } from 'risicare/vercel-ai';
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { tracedGenerateText } = patchVercelAI();
const generate = tracedGenerateText(generateText);

const { text, toolCalls } = await generate({
  model: openai('gpt-4o'),
  tools: {
    weather: tool({
      description: 'Get weather for a location',
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => {
        return { temperature: 72, condition: 'sunny' };
      },
    }),
  },
  prompt: 'What is the weather in Paris?',
});
```

## Next.js Integration
For Next.js applications, call `init()` once at module scope and use the wrapped function inside your route handlers:
```typescript
// app/api/chat/route.ts
import { init } from 'risicare';
import { patchVercelAI } from 'risicare/vercel-ai';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Initialize once
init();

const { tracedStreamText } = patchVercelAI();
const stream = tracedStreamText(streamText);

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await stream({
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}
```
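Note that with streaming routes, usage can only be finalized once the client has drained the stream. The accumulation pattern behind this can be pictured with a plain async generator; this is a dependency-free sketch (the `traceStream` and `fakeStream` names are illustrative, not part of risicare):

```typescript
// Dependency-free sketch of stream accumulation: counts are updated
// as the consumer pulls chunks, and the total is reported when the
// iterator is exhausted (or the consumer stops early).
async function* fakeStream(): AsyncGenerator<string> {
  yield 'Once';
  yield ' upon';
  yield ' a time';
}

async function* traceStream<T>(
  source: AsyncIterable<T>,
  onDone: (chunks: number) => void,
): AsyncGenerator<T> {
  let chunks = 0;
  try {
    for await (const chunk of source) {
      chunks++; // accumulate as the consumer pulls
      yield chunk;
    }
  } finally {
    onDone(chunks); // the "span" closes when iteration ends
  }
}
```

The real wrapper works the same way: the span stays open while chunks flow and records final token usage only when the stream completes.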