# Mistral

Auto-instrumentation for the Mistral API.
Risicare automatically instruments the Mistral Python SDK.
## SDK Version Support

Risicare supports both Mistral SDK v1.0+ (`mistralai.Mistral` with `chat.complete`) and legacy versions (`MistralClient.chat`). Both are auto-instrumented.
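For the legacy SDK (pre-1.0), usage looks like the following sketch. This assumes the pre-1.0 `mistralai` package layout (`MistralClient` and `ChatMessage`); verify the import paths against your installed version.

```python
import risicare
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

risicare.init()

client = MistralClient(api_key="your-api-key")

# Traced the same way as the v1 client
response = client.chat(
    model="mistral-large-latest",
    messages=[ChatMessage(role="user", content="Hello!")],
)
```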
## Installation

```bash
pip install risicare mistralai
```

## Basic Usage
```python
import risicare
from mistralai import Mistral

risicare.init()

client = Mistral(api_key="your-api-key")

# Automatically traced
response = client.chat.complete(
    model="mistral-large-latest",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```

## Supported Methods
| Method | Traced |
|---|---|
| `chat.complete` | Yes (sync) |
| `chat.complete_async` | Yes (async) |
## Streaming

> **Streaming Not Instrumented**
>
> Streaming via `chat.stream` is not instrumented: Risicare only patches `chat.complete` and `chat.complete_async`. Use the non-streaming methods when you need traced calls.
```python
# Not traced — chat.stream is not patched
stream = client.chat.stream(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Write a poem"}]
)
for chunk in stream:
    print(chunk.data.choices[0].delta.content, end="")
```

## Async Support
Async methods are automatically traced:
```python
response = await client.chat.complete_async(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

## Function Calling
Tool use is captured:
```python
response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "What's the weather?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {"type": "object", "properties": {}}
        }
    }]
)
```

## Captured Attributes
| Attribute | Description |
|---|---|
| `gen_ai.system` | Always `mistral` |
| `gen_ai.request.model` | Requested model name |
| `gen_ai.response.model` | Model name returned by the API |
| `gen_ai.response.id` | Response ID |
| `gen_ai.request.temperature` | Sampling temperature |
| `gen_ai.request.max_tokens` | Max output tokens |
| `gen_ai.usage.prompt_tokens` | Input tokens |
| `gen_ai.usage.completion_tokens` | Output tokens |
| `gen_ai.usage.total_tokens` | Total tokens |
| `gen_ai.completion.tool_calls` | Number of tool calls |
| `gen_ai.completion.finish_reason` | Stop reason |
| `gen_ai.latency_ms` | Request latency in milliseconds |
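Risicare records these attributes automatically. As an illustration of where a value like `gen_ai.latency_ms` comes from (this is a minimal timing sketch, not Risicare's internals), a wall-clock wrapper around a call might look like:

```python
import time
from functools import wraps

def with_latency_ms(fn):
    """Wrap a call and return (result, wall-clock latency in milliseconds)."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        latency_ms = (time.perf_counter() - start) * 1000
        return result, latency_ms
    return wrapper

@with_latency_ms
def fake_call():
    time.sleep(0.01)  # stand-in for the API round trip
    return "ok"

result, latency_ms = fake_call()
# latency_ms is at least ~10 (the sleep duration in milliseconds)
```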
## Cost Tracking

Costs are estimated from token usage using these per-model rates (USD):
| Model | Input (per 1M) | Output (per 1M) |
|---|---|---|
| mistral-large | $2.00 | $6.00 |
| mistral-small | $0.20 | $0.60 |
| codestral | $0.20 | $0.60 |
| pixtral-large | $2.00 | $6.00 |
| ministral-8b | $0.10 | $0.10 |
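Given the `gen_ai.usage.*` token counts, a per-call cost estimate follows directly from the table above. A small sketch (the prices are hard-coded from the table, per million tokens; this is illustrative arithmetic, not Risicare's pricing code):

```python
# Prices per 1M tokens (USD), taken from the table above
PRICES = {
    "mistral-large": (2.00, 6.00),
    "mistral-small": (0.20, 0.60),
    "codestral": (0.20, 0.60),
    "pixtral-large": (2.00, 6.00),
    "ministral-8b": (0.10, 0.10),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one call from its token usage."""
    input_price, output_price = PRICES[model]
    return (prompt_tokens * input_price + completion_tokens * output_price) / 1_000_000

# e.g. 1,000 input + 500 output tokens on mistral-large:
cost = estimate_cost("mistral-large", 1000, 500)
# 1000 * 2.00/1e6 + 500 * 6.00/1e6 = 0.002 + 0.003 = 0.005
```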