OpenClaw Reference (Mirrored)

Memory configuration reference

Mirrored from OpenClaw (MIT)
This mirror is provided for convenience. OpenClawdBots is not affiliated with or endorsed by OpenClaw.

This page lists every configuration knob for OpenClaw memory search. For conceptual overviews, see the memory concepts pages (such as Active Memory and Dreaming).

All memory search settings live under agents.defaults.memorySearch in openclaw.json unless noted otherwise.

If you are looking for the active memory feature toggle and sub-agent config, that lives under plugins.entries.active-memory instead of memorySearch.

Active memory uses a two-gate model:

  1. The plugin must be enabled and must target the current agent ID.
  2. The request must be an eligible interactive, persistent chat session.

See Active Memory for the activation model, plugin-owned config, transcript persistence, and safe rollout pattern.


Provider selection

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| provider | string | auto-detected | Embedding adapter ID: openai, gemini, voyage, mistral, bedrock, ollama, local |
| model | string | provider default | Embedding model name |
| fallback | string | "none" | Fallback adapter ID when the primary fails |
| enabled | boolean | true | Enable or disable memory search |

Auto-detection order

When provider is not set, OpenClaw selects the first available:

  1. local -- if memorySearch.local.modelPath is configured and the file exists.
  2. openai -- if an OpenAI key can be resolved.
  3. gemini -- if a Gemini key can be resolved.
  4. voyage -- if a Voyage key can be resolved.
  5. mistral -- if a Mistral key can be resolved.
  6. bedrock -- if the AWS SDK credential chain resolves (instance role, access keys, profile, SSO, web identity, or shared config).

ollama is supported but not auto-detected (set it explicitly).
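Because ollama never wins auto-detection, it has to be pinned explicitly. A minimal sketch, following the config shape used elsewhere on this page; the model name is illustrative, not a documented default:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "ollama",
        // Illustrative: use whichever embedding model your Ollama install serves.
        model: "nomic-embed-text",
      },
    },
  },
}
```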

API key resolution

Remote embeddings require an API key. Bedrock uses the AWS SDK default credential chain instead (instance roles, SSO, access keys).

| Provider | Env var | Config key |
| --- | --- | --- |
| OpenAI | OPENAI_API_KEY | models.providers.openai.apiKey |
| Gemini | GEMINI_API_KEY | models.providers.google.apiKey |
| Voyage | VOYAGE_API_KEY | models.providers.voyage.apiKey |
| Mistral | MISTRAL_API_KEY | models.providers.mistral.apiKey |
| Bedrock | AWS credential chain | No API key needed |
| Ollama | OLLAMA_API_KEY (placeholder) | -- |

Codex OAuth covers chat/completions only and does not satisfy embedding requests.


Remote endpoint config

For custom OpenAI-compatible endpoints or overriding provider defaults:

| Key | Type | Description |
| --- | --- | --- |
| remote.baseUrl | string | Custom API base URL |
| remote.apiKey | string | Override API key |
| remote.headers | object | Extra HTTP headers (merged with provider defaults) |

{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai",
        model: "text-embedding-3-small",
        remote: {
          baseUrl: "https://api.example.com/v1/",
          apiKey: "YOUR_KEY",
        },
      },
    },
  },
}

Gemini-specific config

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | gemini-embedding-001 | Also supports gemini-embedding-2-preview |
| outputDimensionality | number | 3072 | For Embedding 2: 768, 1536, or 3072 |
WARNING: Changing model or outputDimensionality triggers an automatic full reindex.
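A sketch of an Embedding 2 setup combining the keys above; the dimensionality value is one of the three documented options, chosen here only for illustration:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "gemini",
        model: "gemini-embedding-2-preview",
        // One of 768, 1536, or 3072 for Embedding 2.
        outputDimensionality: 1536,
      },
    },
  },
}
```

Changing either value later will kick off a full reindex, so settle on a dimensionality before indexing a large corpus.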


Bedrock embedding config

Bedrock uses the AWS SDK default credential chain -- no API keys needed. If OpenClaw runs on EC2 with a Bedrock-enabled instance role, just set the provider and model:

{
  agents: {
    defaults: {
      memorySearch: {
        provider: "bedrock",
        model: "amazon.titan-embed-text-v2:0",
      },
    },
  },
}

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | amazon.titan-embed-text-v2:0 | Any Bedrock embedding model ID |
| outputDimensionality | number | model default | For Titan V2: 256, 512, or 1024 |
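Building on the example above, a sketch that shrinks Titan V2 vectors with outputDimensionality (512 is one of the documented options, picked here for illustration):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "bedrock",
        model: "amazon.titan-embed-text-v2:0",
        // Titan V2 accepts 256, 512, or 1024.
        outputDimensionality: 512,
      },
    },
  },
}
```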

Supported models

The following models are supported (with family detection and dimension defaults):

| Model ID | Provider | Default Dims | Configurable Dims |
| --- | --- | --- | --- |
| amazon.titan-embed-text-v2:0 | Amazon | 1024 | 256, 512, 1024 |
| amazon.titan-embed-text-v1 | Amazon | 1536 | -- |
| amazon.titan-embed-g1-text-02 | Amazon | 1536 | -- |
| amazon.titan-embed-image-v1 | Amazon | 1024 | -- |
| amazon.nova-2-multimodal-embeddings-v1:0 | Amazon | 1024 | 256, 384, 1024, 3072 |
| cohere.embed-english-v3 | Cohere | 1024 | -- |
| cohere.embed-multilingual-v3 | Cohere | 1024 | -- |
| cohere.embed-v4:0 | Cohere | 1536 | 256-1536 |
| twelvelabs.marengo-embed-3-0-v1:0 | TwelveLabs | 512 | -- |
| twelvelabs.marengo-embed-2-7-v1:0 | TwelveLabs | 1024 | -- |

Throughput-suffixed variants (e.g., amazon.titan-embed-text-v1:2:8k) inherit the base model's configuration.

Authentication

Bedrock auth uses the standard AWS SDK credential resolution order:

  1. Environment variables (AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY)
  2. SSO token cache
  3. Web identity token credentials
  4. Shared credentials and config files
  5. ECS or EC2 metadata credentials

Region is resolved from AWS_REGION, AWS_DEFAULT_REGION, the amazon-bedrock provider baseUrl, or defaults to us-east-1.

IAM permissions

The IAM role or user needs:

{
  "Effect": "Allow",
  "Action": "bedrock:InvokeModel",
  "Resource": "*"
}

For least-privilege, scope InvokeModel to the specific model:

arn:aws:bedrock:*::foundation-model/amazon.titan-embed-text-v2:0
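Putting the two fragments together, a complete least-privilege policy document might look like this (the Version field is the standard IAM policy boilerplate, not something OpenClaw-specific):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:*::foundation-model/amazon.titan-embed-text-v2:0"
    }
  ]
}
```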

Local embedding config

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| local.modelPath | string | auto-downloaded | Path to GGUF model file |
| local.modelCacheDir | string | node-llama-cpp default | Cache dir for downloaded models |

Default model: embeddinggemma-300m-qat-Q8_0.gguf (~0.6 GB, auto-downloaded). Requires native build: pnpm approve-builds then pnpm rebuild node-llama-cpp.
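A sketch pinning a pre-downloaded GGUF file instead of relying on the auto-download; the filesystem path is illustrative:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "local",
        local: {
          // Illustrative path to a local GGUF embedding model.
          modelPath: "/opt/models/embeddinggemma-300m-qat-Q8_0.gguf",
        },
      },
    },
  },
}
```

With modelPath set and the file present, auto-detection also picks local first, so the explicit provider is belt-and-braces.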


Hybrid search config

All under memorySearch.query.hybrid:

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | boolean | true | Enable hybrid BM25 + vector search |
| vectorWeight | number | 0.7 | Weight for vector scores (0-1) |
| textWeight | number | 0.3 | Weight for BM25 scores (0-1) |
| candidateMultiplier | number | 4 | Candidate pool size multiplier |

MMR (diversity)

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| mmr.enabled | boolean | false | Enable MMR re-ranking |
| mmr.lambda | number | 0.7 | 0 = max diversity, 1 = max relevance |

Temporal decay (recency)

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| temporalDecay.enabled | boolean | false | Enable recency boost |
| temporalDecay.halfLifeDays | number | 30 | Score halves every N days |

Evergreen files (MEMORY.md, non-dated files in memory/) are never decayed.

Full example

{
  agents: {
    defaults: {
      memorySearch: {
        query: {
          hybrid: {
            vectorWeight: 0.7,
            textWeight: 0.3,
            mmr: { enabled: true, lambda: 0.7 },
            temporalDecay: { enabled: true, halfLifeDays: 30 },
          },
        },
      },
    },
  },
}

Additional memory paths

| Key | Type | Description |
| --- | --- | --- |
| extraPaths | string[] | Additional directories or files to index |

{
  agents: {
    defaults: {
      memorySearch: {
        extraPaths: ["../team-docs", "/srv/shared-notes"],
      },
    },
  },
}

Paths can be absolute or workspace-relative. Directories are scanned recursively for .md files. Symlink handling depends on the active backend: the builtin engine ignores symlinks, while QMD follows the underlying QMD scanner behavior.

For agent-scoped cross-agent transcript search, use agents.list[].memorySearch.qmd.extraCollections instead of memory.qmd.paths. Those extra collections follow the same { path, name, pattern? } shape, but they are merged per agent and can preserve explicit shared names when the path points outside the current workspace. If the same resolved path appears in both memory.qmd.paths and memorySearch.qmd.extraCollections, QMD keeps the first entry and skips the duplicate.


Multimodal memory (Gemini)

Index images and audio alongside Markdown using Gemini Embedding 2:

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| multimodal.enabled | boolean | false | Enable multimodal indexing |
| multimodal.modalities | string[] | -- | ["image"], ["audio"], or ["all"] |
| multimodal.maxFileBytes | number | 10000000 | Max file size for indexing |

Only applies to files in extraPaths. Default memory roots stay Markdown-only. Requires gemini-embedding-2-preview. fallback must be "none".

Supported formats: .jpg, .jpeg, .png, .webp, .gif, .heic, .heif (images); .mp3, .wav, .ogg, .opus, .m4a, .aac, .flac (audio).
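A sketch combining the requirements above, assuming the dotted keys nest as objects like the other examples on this page; the extraPaths entry is illustrative:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "gemini",
        model: "gemini-embedding-2-preview",
        // Multimodal indexing requires fallback "none".
        fallback: "none",
        multimodal: {
          enabled: true,
          modalities: ["image"],
          maxFileBytes: 10000000,
        },
        // Only files under extraPaths are indexed multimodally.
        extraPaths: ["../design-assets"],
      },
    },
  },
}
```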


Embedding cache

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| cache.enabled | boolean | false | Cache chunk embeddings in SQLite |
| cache.maxEntries | number | 50000 | Max cached embeddings |

Prevents re-embedding unchanged text during reindex or transcript updates.
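A minimal sketch turning the cache on with the default entry cap, assuming the dotted keys nest as objects:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        cache: { enabled: true, maxEntries: 50000 },
      },
    },
  },
}
```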


Batch indexing

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| remote.batch.enabled | boolean | false | Enable batch embedding API |
| remote.batch.concurrency | number | 2 | Parallel batch jobs |
| remote.batch.wait | boolean | true | Wait for batch completion |
| remote.batch.pollIntervalMs | number | -- | Poll interval |
| remote.batch.timeoutMinutes | number | -- | Batch timeout |

Available for openai, gemini, and voyage. OpenAI batch is typically fastest and cheapest for large backfills.
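A sketch enabling batch indexing for an OpenAI backfill, assuming the dotted keys nest as objects:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai",
        remote: {
          batch: {
            enabled: true,
            concurrency: 2,
            // Block until the batch job completes.
            wait: true,
          },
        },
      },
    },
  },
}
```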


Session memory search (experimental)

Index session transcripts and surface them via memory_search:

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| experimental.sessionMemory | boolean | false | Enable session indexing |
| sources | string[] | ["memory"] | Add "sessions" to include transcripts |
| sync.sessions.deltaBytes | number | 100000 | Byte threshold for reindex |
| sync.sessions.deltaMessages | number | 50 | Message threshold for reindex |

Session indexing is opt-in and runs asynchronously. Results can be slightly stale. Session logs live on disk, so treat filesystem access as the trust boundary.
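A sketch that opts in to transcript indexing, assuming the dotted keys nest as objects as in the other examples on this page:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        experimental: { sessionMemory: true },
        // Keep "memory" so regular memory files stay searchable.
        sources: ["memory", "sessions"],
        sync: {
          sessions: { deltaBytes: 100000, deltaMessages: 50 },
        },
      },
    },
  },
}
```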


SQLite vector acceleration (sqlite-vec)

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| store.vector.enabled | boolean | true | Use sqlite-vec for vector queries |
| store.vector.extensionPath | string | bundled | Override sqlite-vec path |

When sqlite-vec is unavailable, OpenClaw falls back to in-process cosine similarity automatically.


Index storage

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| store.path | string | ~/.openclaw/memory/{agentId}.sqlite | Index location (supports {agentId} token) |
| store.fts.tokenizer | string | unicode61 | FTS5 tokenizer (unicode61 or trigram) |
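A sketch overriding both storage keys, assuming the dotted keys nest as objects; trigram is the documented alternative tokenizer and can help with substring-style matching:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        store: {
          // {agentId} is substituted per agent.
          path: "~/.openclaw/memory/{agentId}.sqlite",
          fts: { tokenizer: "trigram" },
        },
      },
    },
  },
}
```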

QMD backend config

Set memory.backend = "qmd" to enable. All QMD settings live under memory.qmd:

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| command | string | qmd | QMD executable path |
| searchMode | string | search | Search command: search, vsearch, query |
| includeDefaultMemory | boolean | true | Auto-index MEMORY.md + memory/**/*.md |
| paths[] | array | -- | Extra paths: { name, path, pattern? } |
| sessions.enabled | boolean | false | Index session transcripts |
| sessions.retentionDays | number | -- | Transcript retention |
| sessions.exportDir | string | -- | Export directory |

OpenClaw prefers the current QMD collection and MCP query shapes, but keeps older QMD releases working by falling back to legacy --mask collection flags and older MCP tool names when needed.

QMD model overrides stay on the QMD side, not OpenClaw config. If you need to override QMD's models globally, set environment variables such as QMD_EMBED_MODEL, QMD_RERANK_MODEL, and QMD_GENERATE_MODEL in the gateway runtime environment.

Update schedule

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| update.interval | string | 5m | Refresh interval |
| update.debounceMs | number | 15000 | Debounce file changes |
| update.onBoot | boolean | true | Refresh on startup |
| update.waitForBootSync | boolean | false | Block startup until refresh completes |
| update.embedInterval | string | -- | Separate embed cadence |
| update.commandTimeoutMs | number | -- | Timeout for QMD commands |
| update.updateTimeoutMs | number | -- | Timeout for QMD update operations |
| update.embedTimeoutMs | number | -- | Timeout for QMD embed operations |

Limits

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| limits.maxResults | number | 6 | Max search results |
| limits.maxSnippetChars | number | -- | Clamp snippet length |
| limits.maxInjectedChars | number | -- | Clamp total injected chars |
| limits.timeoutMs | number | 4000 | Search timeout |

Scope

Controls which sessions can receive QMD search results. Same schema as session.sendPolicy:

{
  memory: {
    qmd: {
      scope: {
        default: "deny",
        rules: [{ action: "allow", match: { chatType: "direct" } }],
      },
    },
  },
}

The shipped default allows direct and channel sessions while still denying groups; the example above is stricter and allows direct messages only.

match.keyPrefix matches the normalized session key; match.rawKeyPrefix matches the raw key, including the agent:<id>: prefix.
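A sketch of a prefix-based rule; the "direct:" prefix here is hypothetical and only illustrates where keyPrefix goes, since actual session-key formats are not documented on this page:

```json5
{
  memory: {
    qmd: {
      scope: {
        default: "deny",
        rules: [
          // Hypothetical prefix: keyPrefix matches the normalized session key.
          { action: "allow", match: { keyPrefix: "direct:" } },
        ],
      },
    },
  },
}
```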

Citations

memory.citations applies to all backends:

| Value | Behavior |
| --- | --- |
| auto (default) | Include Source: <path#line> footer in snippets |
| on | Always include footer |
| off | Omit footer (path still passed to agent internally) |

Full QMD example

{
  memory: {
    backend: "qmd",
    citations: "auto",
    qmd: {
      includeDefaultMemory: true,
      update: { interval: "5m", debounceMs: 15000 },
      limits: { maxResults: 6, timeoutMs: 4000 },
      scope: {
        default: "deny",
        rules: [{ action: "allow", match: { chatType: "direct" } }],
      },
      paths: [{ name: "docs", path: "~/notes", pattern: "**/*.md" }],
    },
  },
}

Dreaming (experimental)

Dreaming is configured under plugins.entries.memory-core.config.dreaming, not under agents.defaults.memorySearch.

Dreaming runs as one scheduled sweep and uses internal light/deep/REM phases as an implementation detail.

For conceptual behavior and slash commands, see Dreaming.

User settings

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | boolean | false | Enable or disable dreaming entirely |
| frequency | string | 0 3 * * * | Optional cron cadence for the full dreaming sweep |

Example

{
  plugins: {
    entries: {
      "memory-core": {
        config: {
          dreaming: {
            enabled: true,
            frequency: "0 3 * * *",
          },
        },
      },
    },
  },
}

Notes:

  • Dreaming writes machine state to memory/.dreams/.
  • Dreaming writes human-readable narrative output to DREAMS.md (or existing dreams.md).
  • The light/deep/REM phase policy and thresholds are internal behavior, not user-facing config.