
Cerebras

Auto-instrument Cerebras for hardware-accelerated inference.

Risicare automatically instruments the Cerebras SDK for wafer-scale inference.

Installation

pip install risicare cerebras-cloud-sdk

Auto-Instrumentation

import risicare
from cerebras.cloud.sdk import Cerebras
 
risicare.init()
 
client = Cerebras()
 
# Automatically traced
response = client.chat.completions.create(
    model="llama3.1-70b",
    messages=[{"role": "user", "content": "Hello!"}]
)
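Calling `risicare.init()` before the client is used is what enables tracing; the general pattern behind this style of auto-instrumentation is method wrapping. Below is a minimal, self-contained sketch of that pattern only — it is not Risicare's actual implementation, and `FakeCompletions` is a stand-in for the SDK:

```python
import functools
import time

class FakeCompletions:
    """Stand-in for the SDK's chat.completions namespace (illustrative only)."""
    def create(self, model, messages, **kwargs):
        return {"model": model, "content": "Hello!"}

def instrument(completions, spans):
    """Patch .create so every call records a span with model and latency."""
    original = completions.create

    @functools.wraps(original)
    def traced_create(model, messages, **kwargs):
        start = time.perf_counter()
        response = original(model=model, messages=messages, **kwargs)
        spans.append({
            "gen_ai.system": "cerebras",
            "gen_ai.request.model": model,
            "gen_ai.latency_ms": (time.perf_counter() - start) * 1000,
        })
        return response

    completions.create = traced_create

spans = []
completions = FakeCompletions()
instrument(completions, spans)
completions.create(model="llama3.1-70b", messages=[{"role": "user", "content": "Hi"}])
print(spans[0]["gen_ai.request.model"])  # llama3.1-70b
```

Because the wrapper returns the original response unchanged, instrumented code behaves exactly as uninstrumented code does.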

Captured Attributes

| Attribute | Description |
| --- | --- |
| `gen_ai.system` | `cerebras` |
| `gen_ai.request.model` | Requested model name |
| `gen_ai.response.model` | Model name returned by the API |
| `gen_ai.response.id` | Response ID |
| `gen_ai.request.temperature` | Sampling temperature |
| `gen_ai.request.max_tokens` | Maximum output tokens |
| `gen_ai.request.stream` | Whether streaming was requested |
| `gen_ai.request.has_tools` | Whether tools were provided |
| `gen_ai.usage.prompt_tokens` | Input tokens |
| `gen_ai.usage.completion_tokens` | Output tokens |
| `gen_ai.usage.total_tokens` | Total tokens |
| `gen_ai.completion.tool_calls` | Number of tool calls made |
| `gen_ai.completion.finish_reason` | Stop reason |
| `gen_ai.latency_ms` | Request latency in milliseconds |
| `cerebras.queue_time` | Queue wait time |
| `cerebras.prompt_time` | Prompt processing time |
| `cerebras.completion_time` | Completion generation time |
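To illustrate how the usage attributes map onto a completion response, here is a hedged sketch that extracts them from an OpenAI-style response dict; the response shape is an assumption based on the chat-completions format, and `usage_attributes` is a hypothetical helper, not part of Risicare's API:

```python
def usage_attributes(response: dict) -> dict:
    """Map an OpenAI-style usage block onto gen_ai.* span attributes."""
    usage = response.get("usage", {})
    return {
        "gen_ai.usage.prompt_tokens": usage.get("prompt_tokens", 0),
        "gen_ai.usage.completion_tokens": usage.get("completion_tokens", 0),
        "gen_ai.usage.total_tokens": usage.get("total_tokens", 0),
    }

# Sample response in the chat-completions shape (illustrative values).
sample = {"usage": {"prompt_tokens": 12, "completion_tokens": 34, "total_tokens": 46}}
print(usage_attributes(sample)["gen_ai.usage.total_tokens"])  # 46
```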

Cerebras provides a detailed timing breakdown via the time_info object on each response, covering queue wait time, prompt processing time, and completion generation time.
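As a sketch of how those per-phase timings become span attributes, the values can be read off `time_info` and converted to milliseconds. The field names below (`queue_time`, `prompt_time`, `completion_time`, in seconds) are an assumption about the SDK's shape, and a plain dict stands in for the real object:

```python
def timing_attributes(time_info: dict) -> dict:
    """Convert time_info seconds into millisecond span attributes."""
    return {
        "cerebras.queue_time": time_info["queue_time"] * 1000,
        "cerebras.prompt_time": time_info["prompt_time"] * 1000,
        "cerebras.completion_time": time_info["completion_time"] * 1000,
    }

# Illustrative values; a real time_info arrives with each response.
info = {"queue_time": 0.002, "prompt_time": 0.010, "completion_time": 0.120}
attrs = timing_attributes(info)
print(round(attrs["cerebras.completion_time"]))  # 120
```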

Streaming

stream = client.chat.completions.create(
    model="llama3.1-70b",
    messages=[{"role": "user", "content": "Write a story"}],
    stream=True
)
 
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
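With streaming, a span can only be finalized once the stream is fully consumed, so a common pattern is to accumulate the deltas into the complete text as they arrive. A self-contained sketch with a stubbed stream (the chunk shape mirrors the chat-completions delta format; `fake_stream` is a stand-in, not an SDK function):

```python
from types import SimpleNamespace

def fake_stream(parts):
    """Stub stream yielding chat-completion-style chunks."""
    for part in parts:
        delta = SimpleNamespace(content=part)
        yield SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

# The final chunk's delta often has content=None, hence the `or ""` guard.
pieces = []
for chunk in fake_stream(["Once ", "upon ", "a time.", None]):
    pieces.append(chunk.choices[0].delta.content or "")
story = "".join(pieces)
print(story)  # Once upon a time.
```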

Supported Models

| Model | Description |
| --- | --- |
| `llama3.1-70b` | Llama 3.1 70B |
| `llama3.1-8b` | Llama 3.1 8B |

Next Steps