# Parameter Fixes
Adjust LLM parameters.
Parameter fixes adjust LLM configuration settings such as temperature, max_tokens, and other sampling and generation controls.
## Use Cases
- Reduce hallucination (lower temperature)
- Get longer responses (increase max_tokens)
- Increase diversity (adjust top_p; see the sketch after this list)
- Control repetition (penalties)
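For example, a diversity fix might adjust top_p. This is a minimal sketch using the config.parameters shape described under Configuration below; the 0.95 value is illustrative, not a recommendation:

```json
{
  "fix_type": "parameter",
  "config": {
    "parameters": {
      "top_p": 0.95
    }
  }
}
```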
## Configuration
Parameters are nested inside a parameters object within config:
```json
{
  "fix_id": "fix-param-001",
  "fix_type": "parameter",
  "config": {
    "parameters": {
      "temperature": 0.3,
      "max_tokens": 4096
    }
  }
}
```

## Supported Parameters
| Parameter | Type | Description |
|---|---|---|
| temperature | float | Sampling randomness (0-2) |
| max_tokens | int | Maximum output tokens |
| top_p | float | Nucleus sampling (0-1) |
| top_k | int | Top-k sampling |
| frequency_penalty | float | Reduce repetition (-2 to 2) |
| presence_penalty | float | Encourage new topics (-2 to 2) |
| stop | string[] | Stop sequences |
| timeout_ms | int | Request timeout in milliseconds |
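Any of these can be combined in a single fix using the same config.parameters shape shown above. For instance, a sketch that cuts generation at custom stop sequences (the sequences here are illustrative):

```json
{
  "fix_type": "parameter",
  "config": {
    "parameters": {
      "stop": ["###", "END"]
    }
  }
}
```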
## Common Fixes
### For Hallucination
Lower temperature for more deterministic output:
```json
{
  "fix_type": "parameter",
  "config": {
    "parameters": {
      "temperature": 0.1
    }
  }
}
```

### For Truncated Output
Increase max_tokens:
```json
{
  "fix_type": "parameter",
  "config": {
    "parameters": {
      "max_tokens": 8192
    }
  }
}
```

### For Repetitive Output
Add penalties:
```json
{
  "fix_type": "parameter",
  "config": {
    "parameters": {
      "frequency_penalty": 0.5,
      "presence_penalty": 0.3
    }
  }
}
```

### For Timeouts
Increase timeout:
```json
{
  "fix_type": "parameter",
  "config": {
    "parameters": {
      "timeout_ms": 60000
    }
  }
}
```

## Provider Compatibility
Parameters are normalized across providers:
| Risicare | OpenAI | Anthropic | |
|---|---|---|---|
| temperature | temperature | temperature | temperature |
| max_tokens | max_tokens | max_tokens | max_output_tokens |
| top_p | top_p | top_p | top_p |
| top_k | - | top_k | top_k |
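Fixes use the names in the Risicare column; the table suggests they are translated to each provider's equivalent at request time (for example, max_tokens to max_output_tokens where needed). A sketch of a portable fix; the top_k value is illustrative, and OpenAI has no top_k equivalent per the table:

```json
{
  "fix_type": "parameter",
  "config": {
    "parameters": {
      "max_tokens": 4096,
      "top_k": 40
    }
  }
}
```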
## Targeting
Target specific models or scenarios:
```json
{
  "target": {
    "models": ["gpt-4o"],
    "error_codes": ["OUTPUT.CONTENT.REPETITIVE"]
  }
}
```
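Targeting can be paired with a parameter payload in one fix. The sketch below combines the repetition penalties from Common Fixes with the target above; it assumes target sits at the top level of the fix alongside config, and the fix_id is hypothetical:

```json
{
  "fix_id": "fix-param-002",
  "fix_type": "parameter",
  "config": {
    "parameters": {
      "frequency_penalty": 0.5,
      "presence_penalty": 0.3
    }
  },
  "target": {
    "models": ["gpt-4o"],
    "error_codes": ["OUTPUT.CONTENT.REPETITIVE"]
  }
}
```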