Mistral
OpenClaw supports Mistral for both text/image model routing (mistral/...) and
audio transcription via Voxtral in media understanding.
Mistral can also be used for memory embeddings (memorySearch.provider = "mistral").
- Provider: mistral
- Auth: MISTRAL_API_KEY
- API: Mistral Chat Completions (https://api.mistral.ai/v1)
Getting started
- Get your API key
  Create an API key in the Mistral Console.
- Run onboarding
  openclaw onboard --auth-choice mistral-api-key
  Or pass the key directly:
  openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
- Set a default model
  {
    env: { MISTRAL_API_KEY: "sk-..." },
    agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
  }
- Verify the model is available
  openclaw models list --provider mistral
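Routing a mistral/... model ref means OpenClaw sends standard Chat Completions requests to Mistral's API. As a minimal sketch (the payload shape follows Mistral's Chat Completions API; the helper function here is illustrative, not part of OpenClaw):

```python
import os

MISTRAL_BASE_URL = "https://api.mistral.ai/v1"

def build_chat_request(model_ref: str, prompt: str) -> dict:
    """Build a Chat Completions payload for a mistral/... model ref.

    The "mistral/" prefix is OpenClaw routing syntax; the API itself
    expects the bare model name.
    """
    provider, _, model = model_ref.partition("/")
    if provider != "mistral":
        raise ValueError(f"not a Mistral model ref: {model_ref}")
    return {
        "url": f"{MISTRAL_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_chat_request("mistral/mistral-large-latest", "Hello")
print(req["body"]["model"])  # mistral-large-latest
```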
Built-in LLM catalog
OpenClaw currently ships this bundled Mistral catalog:
| Model ref | Input | Context | Max output | Notes |
|---|---|---|---|---|
| mistral/mistral-large-latest | text, image | 262,144 | 16,384 | Default model |
| mistral/mistral-medium-2508 | text, image | 262,144 | 8,192 | Mistral Medium 3.1 |
| mistral/mistral-small-latest | text, image | 128,000 | 16,384 | Mistral Small 4; adjustable reasoning via API reasoning_effort |
| mistral/pixtral-large-latest | text, image | 128,000 | 32,768 | Pixtral |
| mistral/codestral-latest | text | 256,000 | 4,096 | Coding |
| mistral/devstral-medium-latest | text | 262,144 | 32,768 | Devstral 2 |
| mistral/magistral-small | text | 128,000 | 40,000 | Reasoning-enabled |
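When choosing a model, the context and output limits above are the numbers to check. This snippet simply transcribes the catalog table into a lookup (it mirrors the table, not any OpenClaw API):

```python
# Context window and max output tokens per bundled Mistral model ref,
# transcribed from the catalog table above.
CATALOG = {
    "mistral/mistral-large-latest":   (262_144, 16_384),
    "mistral/mistral-medium-2508":    (262_144, 8_192),
    "mistral/mistral-small-latest":   (128_000, 16_384),
    "mistral/pixtral-large-latest":   (128_000, 32_768),
    "mistral/codestral-latest":       (256_000, 4_096),
    "mistral/devstral-medium-latest": (262_144, 32_768),
    "mistral/magistral-small":        (128_000, 40_000),
}

def fits_context(model_ref: str, prompt_tokens: int) -> bool:
    """True if a prompt of the given size fits the model's context window."""
    context, _max_out = CATALOG[model_ref]
    return prompt_tokens <= context

print(fits_context("mistral/codestral-latest", 200_000))  # True
print(fits_context("mistral/magistral-small", 200_000))   # False
```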
Audio transcription (Voxtral)
Use Voxtral for audio transcription through the media understanding pipeline.
{
tools: {
media: {
audio: {
enabled: true,
models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
},
},
},
}
The media transcription path uses /v1/audio/transcriptions. The default audio model for Mistral is voxtral-mini-latest.
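A direct call to that endpoint is a multipart upload. The sketch below assumes OpenAI-style field names ("file" and "model"), which the /v1/audio/transcriptions path suggests; treat the field names and helper as assumptions, not OpenClaw internals:

```python
import os

def build_transcription_request(audio_path: str,
                                model: str = "voxtral-mini-latest") -> dict:
    """Describe a multipart request to Mistral's transcription endpoint.

    Assumed fields: "file" for the audio upload and "model" for the
    Voxtral model name.
    """
    return {
        "url": "https://api.mistral.ai/v1/audio/transcriptions",
        "headers": {
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
        },
        "files": {"file": audio_path},
        "data": {"model": model},
    }

req = build_transcription_request("meeting.mp3")
print(req["data"]["model"])  # voxtral-mini-latest
```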
Advanced configuration
Adjustable reasoning (mistral-small-latest)
mistral/mistral-small-latest maps to Mistral Small 4, which supports adjustable reasoning on the Chat Completions API via reasoning_effort: none minimizes extra thinking in the output, while high surfaces full thinking traces before the final answer.
OpenClaw maps the session thinking level to Mistral's API:
| OpenClaw thinking level | Mistral reasoning_effort |
|---|---|
| off / minimal | none |
| low / medium / high / xhigh / adaptive | high |
Other bundled Mistral catalog models do not use this parameter. Keep using magistral-* models when you want Mistral's native reasoning-first behavior.
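The mapping table above can be expressed as a small function (a sketch of the documented mapping, not OpenClaw's actual source):

```python
def to_reasoning_effort(thinking_level: str) -> str:
    """Map an OpenClaw session thinking level to Mistral's reasoning_effort.

    Only mistral/mistral-small-latest consumes this parameter; other
    bundled Mistral models ignore it.
    """
    if thinking_level in ("off", "minimal"):
        return "none"
    if thinking_level in ("low", "medium", "high", "xhigh", "adaptive"):
        return "high"
    raise ValueError(f"unknown thinking level: {thinking_level}")

print(to_reasoning_effort("minimal"))   # none
print(to_reasoning_effort("adaptive"))  # high
```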
Memory embeddings
Mistral can serve memory embeddings via /v1/embeddings (default model: mistral-embed).
{
memorySearch: { provider: "mistral" },
}
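A direct embeddings call matching that configuration would look roughly like this (payload shape per Mistral's /v1/embeddings API; the helper function is illustrative):

```python
import os

def build_embeddings_request(texts: list,
                             model: str = "mistral-embed") -> dict:
    """Build a payload for Mistral's /v1/embeddings endpoint."""
    return {
        "url": "https://api.mistral.ai/v1/embeddings",
        "headers": {
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "body": {"model": model, "input": texts},
    }

req = build_embeddings_request(["memory snippet one", "memory snippet two"])
print(req["body"]["model"])  # mistral-embed
```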
Auth and base URL
- Mistral auth uses MISTRAL_API_KEY.
- Provider base URL defaults to https://api.mistral.ai/v1.
- Onboarding default model is mistral/mistral-large-latest.
- Mistral uses Bearer auth with your API key.