OpenClaw Reference (Mirrored)

Mirrored from OpenClaw (MIT)
This mirror is provided for convenience. OpenClawdBots is not affiliated with or endorsed by OpenClaw.

OpenAI

OpenAI provides developer APIs for GPT models. OpenClaw supports two auth routes:

  • API key — direct OpenAI Platform access with usage-based billing (openai/* models)
  • Codex subscription — ChatGPT/Codex sign-in with subscription access (openai-codex/* models)

OpenAI explicitly supports subscription OAuth usage in external tools and workflows like OpenClaw.

Getting started

Choose your preferred auth method and follow the setup steps.

API key (OpenAI Platform)

Best for: direct API access and usage-based billing.

  1. Get your API key

    Create or copy an API key from the OpenAI Platform dashboard.

  2. Run onboarding
    openclaw onboard --auth-choice openai-api-key
    

    Or pass the key directly:

    openclaw onboard --openai-api-key "$OPENAI_API_KEY"
    
  3. Verify the model is available
    openclaw models list --provider openai
    

Route summary

| Model ref | Route | Auth |
| --- | --- | --- |
| openai/gpt-5.4 | Direct OpenAI Platform API | OPENAI_API_KEY |
| openai/gpt-5.4-pro | Direct OpenAI Platform API | OPENAI_API_KEY |

NOTE

ChatGPT/Codex sign-in is routed through openai-codex/*, not openai/*.

Config example

{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
}
WARNING

OpenClaw does not expose openai/gpt-5.3-codex-spark on the direct API path. Live OpenAI API requests reject that model. Spark is Codex-only.

Codex subscription

Best for: using your ChatGPT/Codex subscription instead of a separate API key. Codex cloud requires ChatGPT sign-in.

  1. Run Codex OAuth
    openclaw onboard --auth-choice openai-codex
    

    Or run OAuth directly:

    openclaw models auth login --provider openai-codex
    
  2. Set the default model
    openclaw config set agents.defaults.model.primary openai-codex/gpt-5.4
    
  3. Verify the model is available
    openclaw models list --provider openai-codex
    

Route summary

| Model ref | Route | Auth |
| --- | --- | --- |
| openai-codex/gpt-5.4 | ChatGPT/Codex OAuth | Codex sign-in |
| openai-codex/gpt-5.3-codex-spark | ChatGPT/Codex OAuth | Codex sign-in (entitlement-dependent) |

NOTE

This route is intentionally separate from openai/gpt-5.4. Use openai/* with an API key for direct Platform access, and openai-codex/* for Codex subscription access.

Config example

{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
}
TIP

If onboarding reuses an existing Codex CLI login, those credentials stay managed by Codex CLI. On expiry, OpenClaw re-reads the external Codex source first and writes the refreshed credential back to Codex storage.

Context window cap

OpenClaw treats model metadata and the runtime context cap as separate values.

For openai-codex/gpt-5.4:

  • Native contextWindow: 1050000
  • Default runtime contextTokens cap: 272000

In practice, the smaller default cap gives better latency and output quality. Override it with contextTokens:

{
  models: {
    providers: {
      "openai-codex": {
        models: [{ id: "gpt-5.4", contextTokens: 160000 }],
      },
    },
  },
}
NOTE

Use contextWindow to declare native model metadata. Use contextTokens to limit the runtime context budget.

Image generation

The bundled openai plugin registers image generation through the image_generate tool.

| Capability | Value |
| --- | --- |
| Default model | openai/gpt-image-1 |
| Max images per request | 4 |
| Edit mode | Enabled (up to 5 reference images) |
| Size overrides | Supported |
| Aspect ratio / resolution | Not forwarded to OpenAI Images API |

{
  agents: {
    defaults: {
      imageGenerationModel: { primary: "openai/gpt-image-1" },
    },
  },
}
NOTE

See Image Generation for shared tool parameters, provider selection, and failover behavior.

Video generation

The bundled openai plugin registers video generation through the video_generate tool.

| Capability | Value |
| --- | --- |
| Default model | openai/sora-2 |
| Modes | Text-to-video, image-to-video, single-video edit |
| Reference inputs | 1 image or 1 video |
| Size overrides | Supported |
| Other overrides | aspectRatio, resolution, audio, watermark are ignored with a tool warning |

{
  agents: {
    defaults: {
      videoGenerationModel: { primary: "openai/sora-2" },
    },
  },
}
NOTE

See Video Generation for shared tool parameters, provider selection, and failover behavior.

Personality overlay

OpenClaw adds a small OpenAI-specific prompt overlay for openai/* and openai-codex/* runs. The overlay keeps the assistant warm, collaborative, concise, and a little more emotionally expressive without replacing the base system prompt.

| Value | Effect |
| --- | --- |
| "friendly" (default) | Enable the OpenAI-specific overlay |
| "on" | Alias for "friendly" |
| "off" | Use base OpenClaw prompt only |

Config
{
  plugins: {
    entries: {
      openai: { config: { personality: "friendly" } },
    },
  },
}
CLI
openclaw config set plugins.entries.openai.config.personality off
TIP

Values are case-insensitive at runtime, so "Off" and "off" both disable the overlay.

Voice and speech

Speech synthesis (TTS)

The bundled openai plugin registers speech synthesis for the messages.tts surface.

| Setting | Config path | Default |
| --- | --- | --- |
| Model | messages.tts.providers.openai.model | gpt-4o-mini-tts |
| Voice | messages.tts.providers.openai.voice | coral |
| Speed | messages.tts.providers.openai.speed | (unset) |
| Instructions | messages.tts.providers.openai.instructions | (unset, gpt-4o-mini-tts only) |
| Format | messages.tts.providers.openai.responseFormat | opus for voice notes, mp3 for files |
| API key | messages.tts.providers.openai.apiKey | Falls back to OPENAI_API_KEY |
| Base URL | messages.tts.providers.openai.baseUrl | https://api.openai.com/v1 |

Available models: gpt-4o-mini-tts, tts-1, tts-1-hd. Available voices: alloy, ash, ballad, cedar, coral, echo, fable, juniper, marin, onyx, nova, sage, shimmer, verse.

{
  messages: {
    tts: {
      providers: {
        openai: { model: "gpt-4o-mini-tts", voice: "coral" },
      },
    },
  },
}
NOTE

Set OPENAI_TTS_BASE_URL to override the TTS base URL without affecting the chat API endpoint.
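For example, to point only TTS at a different endpoint (the host below is a placeholder, not a real endpoint):

```shell
# Override the TTS endpoint only; chat API traffic is unaffected.
# The URL is a placeholder for illustration.
export OPENAI_TTS_BASE_URL="https://tts-proxy.example.com/v1"
```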

Realtime transcription

The bundled openai plugin registers realtime transcription for the Voice Call plugin.

| Setting | Config path | Default |
| --- | --- | --- |
| Model | plugins.entries.voice-call.config.streaming.providers.openai.model | gpt-4o-transcribe |
| Silence duration | ...openai.silenceDurationMs | 800 |
| VAD threshold | ...openai.vadThreshold | 0.5 |
| API key | ...openai.apiKey | Falls back to OPENAI_API_KEY |
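Expressed as a config fragment (a sketch assuming the same JSON5 layout as the other examples on this page; the values shown are the documented defaults):

```json5
{
  plugins: {
    entries: {
      "voice-call": {
        config: {
          streaming: {
            providers: {
              openai: {
                model: "gpt-4o-transcribe",
                silenceDurationMs: 800,
                vadThreshold: 0.5,
              },
            },
          },
        },
      },
    },
  },
}
```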
NOTE

Uses a WebSocket connection to wss://api.openai.com/v1/realtime with G.711 u-law audio.

Realtime voice

The bundled openai plugin registers realtime voice for the Voice Call plugin.

| Setting | Config path | Default |
| --- | --- | --- |
| Model | plugins.entries.voice-call.config.realtime.providers.openai.model | gpt-realtime |
| Voice | ...openai.voice | alloy |
| Temperature | ...openai.temperature | 0.8 |
| VAD threshold | ...openai.vadThreshold | 0.5 |
| Silence duration | ...openai.silenceDurationMs | 500 |
| API key | ...openai.apiKey | Falls back to OPENAI_API_KEY |
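Expressed as a config fragment (a sketch assuming the same JSON5 layout as the other examples on this page; the values shown are the documented defaults):

```json5
{
  plugins: {
    entries: {
      "voice-call": {
        config: {
          realtime: {
            providers: {
              openai: {
                model: "gpt-realtime",
                voice: "alloy",
                temperature: 0.8,
              },
            },
          },
        },
      },
    },
  },
}
```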
NOTE

Supports Azure OpenAI via azureEndpoint and azureDeployment config keys. Supports bidirectional tool calling. Uses G.711 u-law audio format.

Advanced configuration

Transport (WebSocket vs SSE)

OpenClaw uses WebSocket-first with SSE fallback ("auto") for both openai/* and openai-codex/*.

In "auto" mode, OpenClaw:

  • Retries one early WebSocket failure before falling back to SSE
  • After a failure, marks WebSocket as degraded for ~60 seconds and uses SSE during cool-down
  • Attaches stable session and turn identity headers for retries and reconnects
  • Normalizes usage counters (input_tokens / prompt_tokens) across transport variants
| Value | Behavior |
| --- | --- |
| "auto" (default) | WebSocket first, SSE fallback |
| "sse" | Force SSE only |
| "websocket" | Force WebSocket only |

{
  agents: {
    defaults: {
      models: {
        "openai-codex/gpt-5.4": {
          params: { transport: "auto" },
        },
      },
    },
  },
}
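The degraded-WebSocket cool-down in "auto" mode can be sketched roughly as follows. This is an illustrative model only, not OpenClaw's actual implementation; the `TransportSelector` name and exact 60-second window are assumptions.

```typescript
// Illustrative sketch of "auto" transport selection with a
// degraded-WebSocket cool-down. Names and timings are assumptions,
// not OpenClaw internals.
type Transport = "websocket" | "sse";

class TransportSelector {
  private degradedUntil = 0; // epoch ms until which WebSocket is avoided

  constructor(private cooldownMs = 60_000) {}

  // Pick a transport for the next attempt.
  pick(now: number): Transport {
    return now < this.degradedUntil ? "sse" : "websocket";
  }

  // Record a WebSocket failure: avoid WebSocket for the cool-down window.
  reportWsFailure(now: number): void {
    this.degradedUntil = now + this.cooldownMs;
  }
}

const sel = new TransportSelector();
console.log(sel.pick(0));      // "websocket"
sel.reportWsFailure(0);
console.log(sel.pick(30_000)); // "sse" (inside the cool-down)
console.log(sel.pick(61_000)); // "websocket" (cool-down expired)
```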

WebSocket warm-up

OpenClaw enables WebSocket warm-up by default for openai/* to reduce first-turn latency.

// Disable warm-up
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: { openaiWsWarmup: false },
        },
      },
    },
  },
}
Fast mode

OpenClaw exposes a shared fast-mode toggle for both openai/* and openai-codex/*:

  • Chat/UI: /fast status|on|off
  • Config: agents.defaults.models["<provider>/<model>"].params.fastMode

When enabled, OpenClaw maps fast mode to OpenAI priority processing (service_tier = "priority"). Existing service_tier values are preserved, and fast mode does not rewrite reasoning or text.verbosity.

{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": { params: { fastMode: true } },
        "openai-codex/gpt-5.4": { params: { fastMode: true } },
      },
    },
  },
}
NOTE

Session overrides win over config. Clearing the session override in the Sessions UI returns the session to the configured default.

Priority processing (service_tier)

OpenAI's API exposes priority processing via service_tier. Set it per model in OpenClaw:

{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": { params: { serviceTier: "priority" } },
        "openai-codex/gpt-5.4": { params: { serviceTier: "priority" } },
      },
    },
  },
}

Supported values: auto, default, flex, priority.

WARNING

serviceTier is only forwarded to native OpenAI endpoints (api.openai.com) and native Codex endpoints (chatgpt.com/backend-api). If you route either provider through a proxy, OpenClaw leaves service_tier untouched.

Server-side compaction (Responses API)

For direct OpenAI Responses models (openai/* on api.openai.com), OpenClaw auto-enables server-side compaction:

  • Forces store: true (unless model compat sets supportsStore: false)
  • Injects context_management: [{ type: "compaction", compact_threshold: ... }]
  • Default compact_threshold: 70% of contextWindow (or 80000 when unavailable)
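As a sketch, the injected fields in the resulting Responses request body might look like the fragment below. Field placement is illustrative; the threshold shown is the documented 80000 fallback used when contextWindow is unavailable.

```json
{
  "model": "gpt-5.4",
  "store": true,
  "context_management": [
    { "type": "compaction", "compact_threshold": 80000 }
  ]
}
```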
Enable explicitly

Useful for compatible endpoints like Azure OpenAI Responses:

{
  agents: {
    defaults: {
      models: {
        "azure-openai-responses/gpt-5.4": {
          params: { responsesServerCompaction: true },
        },
      },
    },
  },
}
Custom threshold
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            responsesServerCompaction: true,
            responsesCompactThreshold: 120000,
          },
        },
      },
    },
  },
}
Disable
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: { responsesServerCompaction: false },
        },
      },
    },
  },
}
NOTE

responsesServerCompaction only controls context_management injection. Direct OpenAI Responses models still force store: true unless compat sets supportsStore: false.

Strict-agentic GPT mode

For GPT-5-family runs on openai/* and openai-codex/*, OpenClaw can use a stricter embedded execution contract:

{
  agents: {
    defaults: {
      embeddedPi: { executionContract: "strict-agentic" },
    },
  },
}

With strict-agentic, OpenClaw:

  • No longer treats a plan-only turn as successful progress when a tool action is available
  • Retries the turn with an act-now steer
  • Auto-enables update_plan for substantial work
  • Surfaces an explicit blocked state if the model keeps planning without acting
NOTE

Scoped to OpenAI and Codex GPT-5-family runs only. Other providers and older model families keep default behavior.

Native vs OpenAI-compatible routes

OpenClaw treats direct OpenAI, Codex, and Azure OpenAI endpoints differently from generic OpenAI-compatible /v1 proxies:

Native routes (openai/*, openai-codex/*, Azure OpenAI):

  • Keep reasoning: { effort: "none" } intact when reasoning is explicitly disabled
  • Default tool schemas to strict mode
  • Attach hidden attribution headers on verified native hosts only
  • Keep OpenAI-only request shaping (service_tier, store, reasoning-compat, prompt-cache hints)

Proxy/compatible routes:

  • Use looser compat behavior
  • Do not force strict tool schemas or native-only headers

Azure OpenAI uses native transport and compat behavior but does not receive the hidden attribution headers.