OpenClaw Reference (Mirrored)

Mirrored from OpenClaw (MIT)
This mirror is provided for convenience. OpenClawdBots is not affiliated with or endorsed by OpenClaw.

vLLM

vLLM can serve open-source (and some custom) models via an OpenAI-compatible HTTP API. OpenClaw connects to vLLM using the openai-completions API.

OpenClaw can also auto-discover the models your vLLM server exposes when you opt in by setting VLLM_API_KEY (any value works if your server does not enforce auth) and have not defined an explicit models.providers.vllm entry.

Property           Value
-----------------  --------------------------------------
Provider ID        vllm
API                openai-completions (OpenAI-compatible)
Auth               VLLM_API_KEY environment variable
Default base URL   http://127.0.0.1:8000/v1

Getting started

  1. Start vLLM with an OpenAI-compatible server

    Your base URL should expose /v1 endpoints (e.g. /v1/models, /v1/chat/completions); a sample launch command is shown after this list. vLLM commonly runs on:

    http://127.0.0.1:8000/v1
    
  2. Set the API key environment variable

    Any value works if your server does not enforce auth:

    export VLLM_API_KEY="vllm-local"
    
  3. Select a model

    Replace with one of your vLLM model IDs:

    {
      agents: {
        defaults: {
          model: { primary: "vllm/your-model-id" },
        },
      },
    }
    
  4. Verify the model is available
    openclaw models list --provider vllm
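
For step 1, a minimal launch command might look like the following sketch. The model name is only illustrative; substitute a model you actually have access to, and adjust the port if your setup differs:

vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000

This starts vLLM's OpenAI-compatible server, so endpoints such as /v1/models and /v1/chat/completions become available under http://127.0.0.1:8000/v1.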
    

Model discovery (implicit provider)

When VLLM_API_KEY is set (or an auth profile exists) and you do not define models.providers.vllm, OpenClaw queries:

GET http://127.0.0.1:8000/v1/models

and converts the returned IDs into model entries.
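For reference, the response follows the standard OpenAI-compatible list shape (fields abbreviated here; the IDs depend on what your server is serving):

{
  "object": "list",
  "data": [
    { "id": "your-model-id", "object": "model" }
  ]
}

Each returned id becomes addressable as vllm/<id>, e.g. vllm/your-model-id.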

NOTE

If you set models.providers.vllm explicitly, auto-discovery is skipped and you must define models manually.

Explicit configuration (manual models)

Use explicit config when:

  • vLLM runs on a different host or port
  • You want to pin contextWindow or maxTokens values
  • Your server requires a real API key (or you want to control headers)

{
  models: {
    providers: {
      vllm: {
        baseUrl: "http://127.0.0.1:8000/v1",
        apiKey: "${VLLM_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "your-model-id",
            name: "Local vLLM Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
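
After saving an explicit configuration like this, the declared model is referenced the same way as in step 3 (vllm/your-model-id), and you can confirm it is registered with:

openclaw models list --provider vllm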

Advanced notes

Proxy-style behavior

vLLM is treated as a proxy-style OpenAI-compatible /v1 backend, not a native OpenAI endpoint. This means:

Behavior                                  Applied?
----------------------------------------  --------------------------------
Native OpenAI request shaping             No
service_tier                              Not sent
Responses store                           Not sent
Prompt-cache hints                        Not sent
OpenAI reasoning-compat payload shaping   Not applied
Hidden OpenClaw attribution headers       Not injected on custom base URLs

Custom base URL

If your vLLM server runs on a non-default host or port, set baseUrl in the explicit provider config:

{
  models: {
    providers: {
      vllm: {
        baseUrl: "http://192.168.1.50:9000/v1",
        apiKey: "${VLLM_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "my-custom-model",
            name: "Remote vLLM Model",
            reasoning: false,
            input: ["text"],
            contextWindow: 64000,
            maxTokens: 4096,
          },
        ],
      },
    },
  },
}
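
To confirm the remote server is reachable from the machine running OpenClaw, you can query it directly (the host and port here match the example above; adjust them to your setup):

curl http://192.168.1.50:9000/v1/models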

Troubleshooting

Server not reachable

Check that the vLLM server is running and accessible:

curl http://127.0.0.1:8000/v1/models

If you see a connection error, verify the host and port, and confirm that vLLM was started in its OpenAI-compatible server mode.

Auth errors on requests

If requests fail with auth errors, set a real VLLM_API_KEY that matches your server configuration, or configure the provider explicitly under models.providers.vllm.
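
As a sketch, assuming you launch vLLM yourself and want the key enforced end to end, you can pass the same value to the server (vLLM's --api-key option) and to OpenClaw; the model name below is illustrative:

export VLLM_API_KEY="some-strong-secret"
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000 --api-key "$VLLM_API_KEY"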

TIP

If your vLLM server does not enforce auth, any non-empty value for VLLM_API_KEY works as an opt-in signal for OpenClaw.

No models discovered

Auto-discovery requires VLLM_API_KEY to be set and no explicit models.providers.vllm config entry. If you have defined the provider manually, OpenClaw skips discovery and uses only your declared models.
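
A quick way to check both preconditions from the shell that runs OpenClaw (illustrative commands; the second one assumes the default base URL):

echo "${VLLM_API_KEY:?VLLM_API_KEY is not set}"
curl http://127.0.0.1:8000/v1/models

If the variable is set and the endpoint returns models, also make sure your config does not define models.providers.vllm, since that disables discovery.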

NOTE

More help: Troubleshooting and FAQ.