OpenClaw Reference (Mirrored)

Mirrored from OpenClaw (MIT)
This mirror is provided for convenience. OpenClawdBots is not affiliated with or endorsed by OpenClaw.

Mistral

OpenClaw supports Mistral for both text/image model routing (mistral/...) and audio transcription via Voxtral in media understanding. Mistral can also be used for memory embeddings (memorySearch.provider = "mistral").

  • Provider: mistral
  • Auth: MISTRAL_API_KEY
  • API: Mistral Chat Completions (https://api.mistral.ai/v1)
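These settings amount to standard Bearer-authenticated requests against the Mistral Chat Completions API. A minimal Python sketch of what such a request looks like (the helper name and prompt are illustrative, not part of OpenClaw; the payload shape follows Mistral's public API):

```python
import os

MISTRAL_BASE_URL = "https://api.mistral.ai/v1"

def build_chat_request(model: str, prompt: str) -> tuple[str, dict, dict]:
    """Build the URL, headers, and JSON payload for a Chat Completions call."""
    url = f"{MISTRAL_BASE_URL}/chat/completions"
    headers = {
        # MISTRAL_API_KEY is read from the environment, as in the config above.
        "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {
        # The provider prefix (mistral/) is an OpenClaw routing convention;
        # the API itself takes the bare model name.
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload

url, headers, payload = build_chat_request("mistral-large-latest", "Hello")
```

The returned pieces would be sent with any HTTP client (e.g. `requests.post(url, headers=headers, json=payload)`).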

Getting started

  1. Get your API key

    Create an API key in the Mistral Console.

  2. Run onboarding
    openclaw onboard --auth-choice mistral-api-key
    

    Or pass the key directly:

    openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
    
  3. Set a default model
    {
      env: { MISTRAL_API_KEY: "sk-..." },
      agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
    }
    
  4. Verify the model is available
    openclaw models list --provider mistral
    

Built-in LLM catalog

OpenClaw currently ships this bundled Mistral catalog:

| Model ref | Input | Context | Max output | Notes |
| --- | --- | --- | --- | --- |
| mistral/mistral-large-latest | text, image | 262,144 | 16,384 | Default model |
| mistral/mistral-medium-2508 | text, image | 262,144 | 8,192 | Mistral Medium 3.1 |
| mistral/mistral-small-latest | text, image | 128,000 | 16,384 | Mistral Small 4; adjustable reasoning via API reasoning_effort |
| mistral/pixtral-large-latest | text, image | 128,000 | 32,768 | Pixtral |
| mistral/codestral-latest | text | 256,000 | 4,096 | Coding |
| mistral/devstral-medium-latest | text | 262,144 | 32,768 | Devstral 2 |
| mistral/magistral-small | text | 128,000 | 40,000 | Reasoning-enabled |

Audio transcription (Voxtral)

Use Voxtral for audio transcription through the media understanding pipeline.

{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
TIP

The media transcription path uses /v1/audio/transcriptions. The default audio model for Mistral is voxtral-mini-latest.
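As a rough illustration, a transcription call to that endpoint is a multipart request carrying the audio file and a model field. A hedged sketch (the helper name is illustrative; field names follow Mistral's public transcription API):

```python
import os

def build_transcription_request(model: str = "voxtral-mini-latest") -> tuple[str, dict, dict]:
    """Prepare URL, headers, and form fields for a /v1/audio/transcriptions call.

    The audio file itself would be attached as the multipart "file" field,
    e.g. with requests: files={"file": open("clip.mp3", "rb")}.
    """
    url = "https://api.mistral.ai/v1/audio/transcriptions"
    headers = {"Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}"}
    data = {"model": model}
    return url, headers, data
```

Note that, unlike the JSON chat endpoint, this endpoint expects multipart form data, so no `Content-Type: application/json` header is set.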

Advanced configuration

Adjustable reasoning (mistral-small-latest)

mistral/mistral-small-latest maps to Mistral Small 4 and supports adjustable reasoning on the Chat Completions API via reasoning_effort (none minimizes extra thinking in the output; high surfaces full thinking traces before the final answer).

OpenClaw maps the session thinking level to Mistral's API:

| OpenClaw thinking level | Mistral reasoning_effort |
| --- | --- |
| off / minimal | none |
| low / medium / high / xhigh / adaptive | high |
NOTE

Other bundled Mistral catalog models do not use this parameter. Keep using magistral-* models when you want Mistral's native reasoning-first behavior.
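The mapping above can be expressed as a small lookup. A sketch (the function name is illustrative, not OpenClaw's internal API):

```python
def to_reasoning_effort(thinking_level: str) -> str:
    """Map an OpenClaw session thinking level to Mistral's reasoning_effort."""
    if thinking_level in ("off", "minimal"):
        return "none"
    if thinking_level in ("low", "medium", "high", "xhigh", "adaptive"):
        return "high"
    raise ValueError(f"unknown thinking level: {thinking_level!r}")
```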

Memory embeddings

Mistral can serve memory embeddings via /v1/embeddings (default model: mistral-embed).

{
  memorySearch: { provider: "mistral" },
}
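For reference, an embeddings request to that endpoint looks roughly like this Python sketch (the helper name is illustrative; the payload shape follows Mistral's public embeddings API):

```python
import os

def build_embeddings_request(texts: list[str], model: str = "mistral-embed") -> tuple[str, dict, dict]:
    """Prepare URL, headers, and JSON payload for a /v1/embeddings call."""
    url = "https://api.mistral.ai/v1/embeddings"
    headers = {
        "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {"model": model, "input": texts}
    return url, headers, payload
```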
Auth and base URL
  • Mistral auth uses MISTRAL_API_KEY.
  • Provider base URL defaults to https://api.mistral.ai/v1.
  • Onboarding default model is mistral/mistral-large-latest.
  • Requests use Bearer auth with your API key.