
Mistral

Auto-instrumentation for the Mistral API.

Risicare automatically instruments the Mistral Python SDK.

SDK Version Support

Risicare supports both Mistral SDK v1.0+ (mistralai.Mistral with chat.complete) and legacy versions (MistralClient.chat). Both are auto-instrumented.

Installation

pip install risicare mistralai

Basic Usage

import risicare
from mistralai import Mistral
 
risicare.init()
 
client = Mistral(api_key="your-api-key")
 
# Automatically traced
response = client.chat.complete(
    model="mistral-large-latest",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
print(response.choices[0].message.content)

Supported Methods

Method                 Traced
chat.complete          Yes (sync)
chat.complete_async    Yes (async)

Streaming

Streaming Not Instrumented

Streaming via chat.stream is not instrumented: Risicare patches only chat.complete and chat.complete_async. Use the non-streaming methods when you need traced calls.

# Not traced by Risicare
stream = client.chat.stream(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Write a poem"}]
)
 
for chunk in stream:
    print(chunk.data.choices[0].delta.content, end="")
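When you do stream, the per-chunk deltas must be joined client-side to recover the full completion. A minimal sketch of that accumulation step (the accumulate helper is our own, mirroring the delta.content values printed above):

```python
def accumulate(deltas):
    """Join per-chunk content deltas into the final message text.

    None entries (e.g. an empty delta on the final stop chunk) are skipped.
    """
    return "".join(d for d in deltas if d)

# Deltas as they might arrive from the stream above:
full_text = accumulate(["Roses ", "are ", "red", None])
```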

Async Support

Async methods are automatically traced:

response = await client.chat.complete_async(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Hello!"}]
)
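Because each complete_async call is traced independently, several can be awaited concurrently. A sketch of the fan-out pattern, with a stand-in coroutine in place of client.chat.complete_async so the snippet runs without network access:

```python
import asyncio

async def fake_complete(prompt: str) -> str:
    """Stand-in for client.chat.complete_async; sleeps instead of calling the API."""
    await asyncio.sleep(0)
    return f"echo: {prompt}"

async def main() -> list:
    # Each awaited call would get its own trace span.
    return await asyncio.gather(*(fake_complete(p) for p in ("a", "b")))

results = asyncio.run(main())
```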

Function Calling

Tool use is captured:

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "What's the weather?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {"type": "object", "properties": {}}
        }
    }]
)
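When the model responds with tool calls, each call carries a function name and a JSON-encoded argument string that your code must decode and route. A hypothetical dispatcher sketch (the registry, dispatch_tool_call, and the get_weather stub are our own illustrations, not part of Risicare or the SDK):

```python
import json

def dispatch_tool_call(name: str, arguments_json: str, registry: dict):
    """Decode the JSON argument string and invoke the matching tool."""
    args = json.loads(arguments_json or "{}")
    return registry[name](**args)

# Stub tool standing in for a real weather lookup:
registry = {"get_weather": lambda **kwargs: {"temp_c": 21}}
result = dispatch_tool_call("get_weather", "{}", registry)
```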

Captured Attributes

Attribute                        Description
gen_ai.system                    mistral
gen_ai.request.model             Requested model name
gen_ai.response.model            Model name returned by API
gen_ai.response.id               Response ID
gen_ai.request.temperature       Sampling temperature
gen_ai.request.max_tokens        Max output tokens
gen_ai.usage.prompt_tokens       Input tokens
gen_ai.usage.completion_tokens   Output tokens
gen_ai.usage.total_tokens        Total tokens
gen_ai.completion.tool_calls     Number of tool calls
gen_ai.completion.finish_reason  Stop reason
gen_ai.latency_ms                Request latency in milliseconds

Cost Tracking

Model           Input (per 1M)   Output (per 1M)
mistral-large   $2.00            $6.00
mistral-small   $0.20            $0.60
codestral       $0.20            $0.60
pixtral-large   $2.00            $6.00
ministral-8b    $0.10            $0.10
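As a worked example of how these rates become a per-call figure: 1,000 input and 500 output tokens on mistral-large cost 0.001 × $2.00 + 0.0005 × $6.00 = $0.005. A sketch of that arithmetic (the rate dict and estimate_cost helper are illustrative, not a Risicare API):

```python
# (input $/1M tokens, output $/1M tokens), from the table above
RATES_PER_1M = {
    "mistral-large": (2.00, 6.00),
    "mistral-small": (0.20, 0.60),
    "codestral": (0.20, 0.60),
    "pixtral-large": (2.00, 6.00),
    "ministral-8b": (0.10, 0.10),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD for one call at the per-1M-token rates above."""
    input_rate, output_rate = RATES_PER_1M[model]
    return (prompt_tokens * input_rate + completion_tokens * output_rate) / 1_000_000

cost = estimate_cost("mistral-large", 1_000, 500)
```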

Next Steps