SDK Parity

Feature comparison between the Python (v0.1.10) and JavaScript/TypeScript (v0.3.0) SDKs. Both SDKs share the same wire format and connect to the same gateway — differences are in API surface and integration breadth.

Core API

| Feature | Python | JavaScript | Notes |
|---|---|---|---|
| init() | risicare.init() | init() | Same config options |
| shutdown() | risicare.shutdown(timeout_ms=5000) | await shutdown() | JS is async |
| flush() | client.flush() | await flush() | JS is async |
| enable() / disable() | risicare.enable() | enable() | |
| is_enabled() | risicare.is_enabled() | isEnabled() | |
| get_client() | risicare.get_client() | — | JS uses getTracer() instead |
| get_tracer() | risicare.get_tracer() | getTracer() | |
| reset_for_testing() | risicare.reset_for_testing() | — | Python only |

Progressive Integration Tiers

| Tier | Feature | Python | JavaScript |
|---|---|---|---|
| 0 | Env var auto-instrumentation | RISICARE_TRACING=true | RISICARE_TRACING=true |
| 1 | Explicit init | risicare.init() | init() |
| 1 | @trace / trace() | Decorator + context manager | trace('name', fn) wrapper |
| 2 | @agent | Decorator | agent(opts, fn) wrapper |
| 3 | @session / session_context() | Decorator + context manager | session(opts, fn) / withSession() |
| 4 | Phase decorators | @trace_think, @trace_decide, @trace_act, @trace_observe | traceThink(fn), traceDecide(fn), traceAct(fn), traceObserve(fn) |
| 5 | Multi-agent | @trace_message, @trace_delegate, @trace_coordinate | traceMessage(opts, fn), traceDelegate(opts, fn), traceCoordinate(opts, fn) |

Critical API behavior difference

Phase decorators behave differently between Python and JavaScript.

Python executes the function immediately and returns its result:

```python
result = trace_think("analyze", lambda: "the answer")  # result = "the answer"
```

JavaScript returns a new function (decorator pattern) that you must call:

```javascript
const fn = traceThink("analyze", () => "the answer");  // fn = [Function]
const result = fn();  // result = "the answer"

// Or call immediately:
const result2 = traceThink("analyze", () => "the answer")();
```

This applies to traceThink, traceDecide, traceAct, traceObserve, traceMessage, traceDelegate, and traceCoordinate. The agent() and session() wrappers follow the same pattern in both SDKs — they always return a callable.
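The two calling conventions can be sketched with stand-in implementations. These stubs model only the calling convention, not the real span recording the SDKs perform:

```python
# Stand-ins for the SDKs' phase helpers; they illustrate the calling
# convention only and do not record any spans.

def trace_think_py(name, fn):
    # Python convention: execute the function immediately, return its result.
    return fn()

def trace_think_js(name, fn):
    # JavaScript convention: return a new callable that runs fn when invoked.
    def wrapped(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapped

result = trace_think_py("analyze", lambda: "the answer")    # "the answer"
deferred = trace_think_js("analyze", lambda: "the answer")  # a function
result2 = deferred()                                        # "the answer"
```

The practical consequence: porting Python code to JavaScript mechanically, without adding the trailing call, silently yields an unused function instead of the traced result.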

Context Propagation

| Feature | Python | JavaScript |
|---|---|---|
| Mechanism | contextvars | AsyncLocalStorage |
| Thread propagation | auto_patch=True patches ThreadPoolExecutor | Automatic via Node.js |
| Async propagation | auto_patch=True patches asyncio.create_task | Automatic via AsyncLocalStorage |
| session_context() | Sync + async variants | withSession() (single API) |
| agent_context() | Sync + async variants | withAgent() (single API) |
| phase_context() | phase_context(SemanticPhase.THINK) | withPhase(SemanticPhase.THINK, fn) |
| W3C inject_trace_context() | inject_trace_context(headers) | injectTraceContext(headers) |
| W3C extract_trace_context() | extract_trace_context(headers) | extractTraceContext(headers) |
| get_current_session() | Returns SessionContext \| None | Returns SessionContext \| undefined |
| get_current_agent() | Returns AgentContext \| None | Returns AgentContext \| undefined |
| get_current_span() | Returns Span \| None | Returns Span \| undefined |
| Span registry | register_span(), get_span_by_id(), unregister_span() | registerSpan(), getSpanById(), unregisterSpan() |
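The contextvars mechanism on the Python side can be illustrated with a minimal, self-contained sketch. SessionContext here is a stand-in dataclass, not the SDK's real type:

```python
import asyncio
import contextvars
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionContext:  # stand-in for the SDK's SessionContext
    session_id: str

_current_session: contextvars.ContextVar[Optional[SessionContext]] = \
    contextvars.ContextVar("current_session", default=None)

def get_current_session() -> Optional[SessionContext]:
    return _current_session.get()

async def child_task() -> Optional[str]:
    # asyncio.create_task copies the caller's context, so the session set
    # in the parent coroutine is visible here without explicit plumbing.
    ctx = get_current_session()
    return ctx.session_id if ctx else None

async def main() -> Optional[str]:
    _current_session.set(SessionContext(session_id="sess-123"))
    return await asyncio.create_task(child_task())

print(asyncio.run(main()))  # sess-123
```

This automatic copy-on-task-creation is why the JS side needs no patching (AsyncLocalStorage behaves the same way), while Python only needs auto_patch for threads and for tasks created before the context was set.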

LLM Provider Support

| Provider | Python | JavaScript |
|---|---|---|
| OpenAI | Auto-instrumented | patchOpenAI() |
| Anthropic | Auto-instrumented | patchAnthropic() |
| Vercel AI SDK | — | patchVercelAI() |
| Google Gemini | Auto-instrumented | patchGoogleAI() |
| Cohere | Auto-instrumented | patchCohere() |
| Mistral | Auto-instrumented | patchMistral() |
| Groq | Auto-instrumented | patchGroq() |
| Together AI | Auto-instrumented | patchTogether() |
| Ollama | Auto-instrumented | patchOllama() |
| Amazon Bedrock | Auto-instrumented | patchBedrock() |
| Google Vertex AI | Auto-instrumented | Via patchGoogleAI() |
| Cerebras | Auto-instrumented | patchCerebras() |
| HuggingFace | Auto-instrumented | patchHuggingFace() |
| OpenAI-compatible (base_url) | Host detection (8 providers) | Host detection via patchOpenAI() |

Both SDKs support 12 native providers plus 8 host-detected providers (DeepSeek, xAI, Fireworks, Baseten, Novita, BytePlus, vLLM, and any OpenAI-compatible API). Python auto-patches on import; JavaScript requires an explicit patchX() call.
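Host detection for OpenAI-compatible providers amounts to matching the configured base_url against known hostnames. A sketch of the idea; the hostname substrings below are illustrative assumptions, not the SDKs' actual detection table:

```python
from urllib.parse import urlparse

# Illustrative hostname substrings; the real detection table may differ.
_HOST_HINTS = {
    "deepseek": "deepseek",
    "x.ai": "xai",
    "fireworks": "fireworks",
    "baseten": "baseten",
    "novita": "novita",
    "byteplus": "byteplus",
}

def detect_provider(base_url: str) -> str:
    """Map an OpenAI-compatible base_url to a provider label."""
    host = urlparse(base_url).hostname or ""
    for hint, provider in _HOST_HINTS.items():
        if hint in host:
            return provider
    # Anything unrecognized is treated as a generic OpenAI-compatible API
    # (covers self-hosted vLLM and other compatible servers).
    return "openai_compatible"

print(detect_provider("https://api.deepseek.com/v1"))  # deepseek
print(detect_provider("http://localhost:8000/v1"))     # openai_compatible
```

Because detection keys off the client's base_url rather than the package, one OpenAI patch covers every compatible provider.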

Instrumentation Style

| | Python | JavaScript |
|---|---|---|
| Method | Import hooks (sys.meta_path) — automatic | ES Proxy wrapping — manual patchX() call |
| When | On first import openai after init() | After calling patchOpenAI(new OpenAI()) |
| Control | install_import_hooks() / remove_import_hooks() | Call or skip patchX() |
| Check | is_instrumented("openai") | — |
| List | get_supported_modules() | — |
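A sys.meta_path hook is the standard way to run code the moment a target module is imported. This sketch only records that the import was observed; a real auto-instrumentation hook would patch the module after it loads:

```python
import sys

class InstrumentOnImport:
    """Meta-path finder that observes imports of target modules.

    A real hook would wrap the loaded module's client classes; this
    sketch only records that the import was seen.
    """

    def __init__(self, targets):
        self.targets = set(targets)
        self.seen = []

    def find_spec(self, fullname, path, target=None):
        if fullname in self.targets:
            self.seen.append(fullname)
        return None  # defer to the normal import machinery

hook = InstrumentOnImport({"colorsys"})
sys.meta_path.insert(0, hook)

sys.modules.pop("colorsys", None)  # force a fresh import for the demo
import colorsys  # noqa: F401

sys.meta_path.remove(hook)
print(hook.seen)  # ['colorsys']
```

Returning None from find_spec lets the normal import proceed, which is why the hook is invisible to application code; this is also why the Python SDK can instrument openai without the user touching their client construction.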

Framework Support

| Framework | Python | JavaScript |
|---|---|---|
| LangChain | Callback + patches | RisicareCallbackHandler |
| LangGraph | Patches | instrumentLangGraph() |
| Instructor | Patches | patchInstructor() |
| LlamaIndex | Span handler | RisicareLlamaIndexHandler |
| CrewAI | Patches | — (Python only) |
| AutoGen | Patches (v0.2 + v0.4) | — (Python only) |
| OpenAI Agents SDK | Patches | — (Python only) |
| LiteLLM | Callback | — (Python only) |
| DSPy | Callback | — (Python only) |
| Pydantic AI | Patches | — (Python only) |

Python supports 10 framework integrations. JavaScript supports 4 (LangChain.js, LangGraph.js, Instructor, LlamaIndex.TS). The remaining six (CrewAI, AutoGen, OpenAI Agents SDK, LiteLLM, DSPy, and Pydantic AI) are Python-only; no JavaScript npm packages exist for CrewAI, AutoGen, LiteLLM, DSPy, or Pydantic AI.

Exporters

| Exporter | Python | JavaScript |
|---|---|---|
| HttpExporter | HTTP/2 via httpx | Native fetch |
| ConsoleExporter | To stderr | To stdout |
| BatchSpanProcessor | Timer + count threshold | setInterval + count threshold |
| OTLPExporter | OTLP/HTTP JSON + circuit breaker | — |
| SpanExporter base | Abstract class | Interface |
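The batching strategy both SDKs use (flush when either the batch fills or a timer fires) can be sketched in a few lines. This is a simplified stand-in, not the SDK's BatchSpanProcessor:

```python
import threading
from typing import Any, Callable, List, Optional

class BatchSpanProcessor:
    """Sketch: buffer spans, export when the batch size is reached or
    the timer fires, whichever comes first."""

    def __init__(self, export: Callable[[List[Any]], None],
                 batch_size: int = 3, timeout_s: float = 5.0):
        self._export = export
        self._batch_size = batch_size
        self._timeout_s = timeout_s
        self._buffer: List[Any] = []
        self._lock = threading.Lock()
        self._timer: Optional[threading.Timer] = None

    def on_end(self, span: Any) -> None:
        with self._lock:
            self._buffer.append(span)
            if len(self._buffer) >= self._batch_size:
                self._flush_locked()
            elif self._timer is None:
                # Arm the timeout so a sparse trickle of spans still exports.
                self._timer = threading.Timer(self._timeout_s, self.flush)
                self._timer.daemon = True
                self._timer.start()

    def flush(self) -> None:
        with self._lock:
            self._flush_locked()

    def _flush_locked(self) -> None:
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None
        if self._buffer:
            self._export(self._buffer)
            self._buffer = []

exported: List[List[str]] = []
proc = BatchSpanProcessor(exported.append, batch_size=3, timeout_s=60)
for name in ("a", "b", "c", "d"):
    proc.on_end(name)
proc.flush()
print(exported)  # [['a', 'b', 'c'], ['d']]
```

The JS version swaps threading.Timer for setInterval; the size-or-timeout logic is the same on both sides.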

Fix Runtime

| Feature | Python | JavaScript |
|---|---|---|
| FixRuntime | Full implementation | Full implementation |
| init_runtime() | init_runtime(config) | initFixRuntime() (called by init()) |
| get_runtime() | get_runtime() | getFixRuntime() |
| shutdown_runtime() | shutdown_runtime() | shutdownFixRuntime() |
| FixLoader | Loads fixes from API | Loads fixes from API |
| FixApplier | Applies fixes to functions | Applies fixes to functions |
| FixCache | Local fix caching | Local fix caching |
| FixInterceptor | Routes calls via A/B test | interceptCall() in OpenAI patch |

Both SDKs wire the Fix Runtime into init() automatically. The OpenAI provider patch in JS calls getFixRuntime().interceptCall() before every LLM call to check for active fixes.
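A/B routing of fixes usually hashes a stable key so a given session always lands in the same arm. A sketch under that assumption; the bucketing scheme and fix shape here are illustrative, not the SDK's actual FixInterceptor:

```python
import hashlib

class FixInterceptor:
    """Sketch of A/B fix routing: deterministically bucket each call by
    session id so a given session always sees the same arm."""

    def __init__(self, fixes, rollout_pct=50):
        self.fixes = fixes          # e.g. {"prompt": replacement_fn}
        self.rollout_pct = rollout_pct

    def _bucket(self, session_id: str) -> int:
        # Stable 0-99 bucket derived from the session id.
        digest = hashlib.sha256(session_id.encode()).digest()
        return digest[0] % 100

    def intercept_call(self, session_id: str, params: dict) -> dict:
        if self._bucket(session_id) < self.rollout_pct:
            for key, apply_fix in self.fixes.items():
                if key in params:
                    params = {**params, key: apply_fix(params[key])}
        return params

interceptor = FixInterceptor({"prompt": lambda p: p + "\nBe concise."},
                             rollout_pct=100)
out = interceptor.intercept_call("sess-1", {"prompt": "Summarize this."})
print(out["prompt"])
```

Deterministic bucketing (rather than random sampling per call) keeps a session's behavior consistent across the many LLM calls it makes.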

OpenTelemetry

| Feature | Python | JavaScript |
|---|---|---|
| OTel bridge | otel_bridge=True in init() | — |
| RisicareSpanExporter | Plugs into OTel SDK | — |
| RisicareSpanProcessor | Plugs into OTel SDK | — |
| OTLP export | OTLPExporter class | — |
| OTLP ingestion | Gateway accepts OTLP/HTTP | Gateway accepts OTLP/HTTP |
OTLP ingestionGateway accepts OTLP/HTTPGateway accepts OTLP/HTTP

Configuration

| Option | Python | JavaScript |
|---|---|---|
| api_key | api_key | apiKey |
| endpoint | endpoint | endpoint |
| environment | environment | environment |
| service_name | service_name | serviceName |
| service_version | service_version | serviceVersion |
| enabled | enabled | enabled |
| trace_content | trace_content | traceContent |
| sample_rate | sample_rate | sampleRate |
| batch_size | batch_size | batchSize |
| batch_timeout_ms | batch_timeout_ms | batchTimeoutMs |
| max_queue_size | — | maxQueueSize |
| auto_patch | auto_patch (default True) | — |
| debug | debug | debug |
| compress | compress | — |
| metadata | metadata | metadata |
| exporters | exporters | — |
| otlp_endpoint | otlp_endpoint | — |
| otlp_headers | otlp_headers | — |
| otel_bridge | otel_bridge | — |
| project_id (deprecated) | Emits DeprecationWarning | Emits console.warn |
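The mapping between the Python snake_case and JavaScript camelCase option names is mechanical. A small helper pair, for readers porting configuration between the SDKs (these functions are illustrative, not part of either SDK):

```python
import re

def snake_to_camel(name: str) -> str:
    """batch_timeout_ms -> batchTimeoutMs"""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

def camel_to_snake(name: str) -> str:
    """batchTimeoutMs -> batch_timeout_ms"""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(snake_to_camel("batch_timeout_ms"))  # batchTimeoutMs
print(camel_to_snake("serviceName"))       # service_name
```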

Enums

SpanKind, SpanStatus, and SemanticPhase have identical values in both SDKs. AgentRole and MessageType differ:

| Enum | Python | JavaScript |
|---|---|---|
| SpanKind | 17 values (identical) | 17 values (identical) |
| SpanStatus | UNSET, OK, ERROR | UNSET, OK, ERROR |
| SemanticPhase | THINK, DECIDE, ACT, OBSERVE, REFLECT, COMMUNICATE, COORDINATE | Same 7 values |
| AgentRole | 12 values | 14 values (superset; adds REVIEWER, CUSTOM) |
| MessageType | 15 values | 18 values (superset; adds REQUEST, DELEGATE, COORDINATE) |

Wire Format

Both SDKs produce identical JSON payloads sent to POST /v1/spans. The gateway does not distinguish between Python and JavaScript origins. All 37 SpanData fields are supported by both SDKs.
