OpenClaw Reference (Mirrored)

MiniMax

Mirrored from OpenClaw (MIT)
This mirror is provided for convenience. OpenClawdBots is not affiliated with or endorsed by OpenClaw.

MiniMax

MiniMax is an AI company that builds the M2/M2.1 model family. The current coding-focused release is MiniMax M2.1 (December 23, 2025), built for complex real-world tasks.

Source: MiniMax M2.1 release note

Model overview (M2.1)

MiniMax highlights these improvements in M2.1:

  • Stronger multi-language coding (Rust, Java, Go, C++, Kotlin, Objective-C, TS/JS).
  • Better web/app development and aesthetic output quality (including native mobile).
  • Improved composite instruction handling for office-style workflows, building on interleaved thinking and integrated constraint execution.
  • More concise responses with lower token usage and faster iteration loops.
  • Stronger tool/agent framework compatibility and context management (Claude Code, Droid/Factory AI, Cline, Kilo Code, Roo Code, BlackBox).
  • Higher-quality dialogue and technical writing outputs.

MiniMax M2.1 vs MiniMax M2.1 Lightning

  • Speed: Lightning is the “fast” variant in MiniMax’s pricing docs.
  • Cost: Pricing lists the same input cost, but Lightning has a higher output cost.
  • Coding plan routing: The Lightning back-end isn’t directly selectable on the MiniMax coding plan. MiniMax auto-routes most requests to Lightning but falls back to the regular M2.1 back-end during traffic spikes.
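
If you call the API directly rather than through the coding plan, Lightning can be declared as its own model entry. A minimal sketch, assuming the id MiniMax-M2.1-lightning (matching the model id listed under Troubleshooting below) and placeholder cost values — check MiniMax’s pricing docs for the actual numbers:

```json5
{
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "MiniMax-M2.1-lightning",
            name: "MiniMax M2.1 Lightning",
            reasoning: false,
            input: ["text"],
            // Placeholder costs: pricing shows the same input cost as M2.1
            // but a higher output cost. Verify before relying on these.
            cost: { input: 15, output: 90, cacheRead: 2, cacheWrite: 10 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

You can then select it with agents.defaults.model.primary: "minimax/MiniMax-M2.1-lightning".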

Choose a setup

MiniMax Coding Plan (OAuth)

Best for: quick setup with the MiniMax Coding Plan via OAuth; no API key required.

Enable the bundled OAuth plugin and authenticate:

openclaw plugins enable minimax-portal-auth  # skip if already loaded
openclaw gateway restart  # restart if the gateway is already running
openclaw onboard --auth-choice minimax-portal

You will be prompted to select an endpoint:

  • Global - International users (api.minimax.io)
  • CN - Users in China (api.minimaxi.com)

See MiniMax OAuth plugin README for details.

MiniMax M2.1 (API key)

Best for: hosted MiniMax via the Anthropic-compatible API.

Configure via CLI:

  • Run openclaw configure
  • Select Model/auth
  • Choose MiniMax M2.1

Or configure manually via openclaw.json:
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}

MiniMax M2.1 as fallback (Opus primary)

Best for: keeping Opus 4.6 as primary with failover to MiniMax M2.1.

{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-6": { alias: "opus" },
        "minimax/MiniMax-M2.1": { alias: "minimax" },
      },
      model: {
        primary: "anthropic/claude-opus-4-6",
        fallbacks: ["minimax/MiniMax-M2.1"],
      },
    },
  },
}

Optional: Local via LM Studio (manual)

Best for: local inference with LM Studio. We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a desktop/server) using LM Studio's local server.

Configure manually via openclaw.json:

{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } },
    },
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1 GS32",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}

Configure via openclaw configure

Use the interactive config wizard to set MiniMax without editing JSON:

  1. Run openclaw configure.
  2. Select Model/auth.
  3. Choose MiniMax M2.1.
  4. Pick your default model when prompted.

Configuration options

  • models.providers.minimax.baseUrl: prefer https://api.minimax.io/anthropic (Anthropic-compatible); https://api.minimax.io/v1 is optional for OpenAI-compatible payloads.
  • models.providers.minimax.api: prefer anthropic-messages; openai-completions is optional for OpenAI-compatible payloads.
  • models.providers.minimax.apiKey: MiniMax API key (MINIMAX_API_KEY).
  • models.providers.minimax.models: define id, name, reasoning, contextWindow, maxTokens, cost.
  • agents.defaults.models: alias models you want in the allowlist.
  • models.mode: keep merge if you want to add MiniMax alongside built-ins.
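
For the optional OpenAI-compatible setup mentioned above, the provider entry swaps baseUrl and api while keeping the rest of the model definition. A sketch under those assumptions, with the model metadata copied from the Anthropic-compatible example earlier on this page:

```json5
{
  models: {
    mode: "merge",
    providers: {
      minimax: {
        // OpenAI-compatible endpoint; prefer the Anthropic-compatible
        // https://api.minimax.io/anthropic unless your tooling needs this.
        baseUrl: "https://api.minimax.io/v1",
        apiKey: "${MINIMAX_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```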

Troubleshooting

“Unknown model: minimax/MiniMax-M2.1”

This usually means the MiniMax provider isn’t configured (no provider entry and no MiniMax auth profile/env key found). A fix for this detection ships in 2026.1.12 (unreleased at the time of writing). To resolve it:

  • Upgrade to 2026.1.12 (or run from source main), then restart the gateway; or
  • Run openclaw configure and select MiniMax M2.1; or
  • Add the models.providers.minimax block manually; or
  • Set MINIMAX_API_KEY (or a MiniMax auth profile) so the provider can be injected.
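
For the manual option, a sketch of a minimal models.providers.minimax block, reusing the values from the full API-key example earlier on this page:

```json5
{
  models: {
    mode: "merge",  // keep built-in providers alongside MiniMax
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```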

Model ids are case‑sensitive; use the exact ids:

  • minimax/MiniMax-M2.1
  • minimax/MiniMax-M2.1-lightning

Then recheck with:

openclaw models list