OpenClaw × Nowledge Mem
Set up OpenClaw with lossless session memory and shared cross-tool memory in 5 minutes.
```
openclaw plugins install @nowledge/openclaw-nowledge-mem
```
Once configured, OpenClaw keeps every conversation as a searchable thread, can distill important decisions into linked memories, and can recall the knowledge you already captured from your other AI tools, documents, and imported threads.
Behind that, Nowledge Mem is doing more than storing notes. It links related knowledge into a graph, tracks how ideas evolve, and can keep processing in the background so OpenClaw can benefit from daily briefings, contradiction checks, and crystals built from multiple sources.
Your first success state
The fastest proof is simple: remember one fact, recall it in a fresh session, then confirm the session itself shows up as a searchable thread.
Let an AI Agent Do the Setup
If you want OpenClaw or another AI agent to handle the setup for you, give it this:
```
Read https://nowled.ge/openclaw-skill and follow it to install, configure, verify, and explain Nowledge Mem for OpenClaw.
```
That guide is written for AI agents, not humans. It handles local vs remote mode, optional API auth, trust pinning, restart, verification, and next steps.
Before You Start
You need:
- Nowledge Mem running locally (installation)
- OpenClaw 2026.3.7 or later (OpenClaw getting started)
- nmem CLI on your PATH. In Nowledge Mem, go to Settings > Preferences > Developer Tools and click Install CLI. Or install it standalone:

```
pip install nmem-cli
```

Quick check that everything is in place:

```
nmem status          # should show Nowledge Mem is running
openclaw --version
```

Setup
Install the plugin
```
openclaw plugins install @nowledge/openclaw-nowledge-mem
```
The installer enables the plugin and switches OpenClaw's memory slot to openclaw-nowledge-mem automatically.
To update to the latest version:
```
openclaw plugins update openclaw-nowledge-mem
```

Optional but recommended: pin trust for non-bundled plugins
If OpenClaw warns that plugins.allow is empty, add this:
```json
{
  "plugins": {
    "allow": ["openclaw-nowledge-mem"]
  }
}
```
If you also use linked or workspace copies, review plugins.load.paths too. OpenClaw allowlists plugin ids, not install provenance.
Restart OpenClaw and verify
```
openclaw nowledge-mem status
```
If Nowledge Mem is reachable, you're done.
If you manage OpenClaw config manually instead of using openclaw plugins install, make sure plugins.slots.memory is openclaw-nowledge-mem and plugins.entries.openclaw-nowledge-mem.enabled is true.
For local mode, no API key is needed. If you are connecting to a remote Nowledge Mem server instead, set apiUrl, and add apiKey when that server has auth enabled.
Verify It Works (1 Minute)
In OpenClaw chat:
1. `/remember We chose PostgreSQL for task events`
2. `/recall PostgreSQL` - should find it immediately
3. `/new` - start a fresh session
4. Ask: "What database did we choose for task events?" - it remembers across sessions
5. Ask: "What was I working on this week?" - weekly activity view
6. Ask: "What was I doing on February 17?" - down to the exact day
7. `/forget PostgreSQL task events` - clean deletion
If all seven steps work, the memory system is fully running.
What You Can Do
Keep every OpenClaw session, not just a summary
Every OpenClaw conversation is captured as a real thread you can search later. When a session contains decisions, learnings, or preferences worth keeping, Nowledge Mem can distill them into structured memories that still point back to the source conversation through sourceThreadId.
Use graph-based memory, not a flat archive
Memories are linked to related entities, earlier and later versions of the same idea, and the source conversations they came from. That means OpenClaw can do more than keyword recall. It can trace how a decision changed, follow connected topics, and explain where an answer came from.
Let knowledge improve in the background
When Background Intelligence is enabled in Nowledge Mem, the system keeps working after the session ends: deduplicating overlap, surfacing contradictions, writing Working Memory briefings, and creating crystals when several memories converge into something worth keeping. OpenClaw can read those outputs the next time you work.
Remember anything, forever
Tell the AI /remember We decided against microservices, the team is too small. Next week, in a different session, ask "what was that decision about microservices?" It finds it.
Browse your work by date
Ask "what was I doing last Tuesday?" and the AI lists everything you saved, documents you added, and insights generated that day. You can ask for a specific date, not just "the past N days."
Bring the rest of your AI work into OpenClaw
What you learned in Claude, decided in Cursor, captured from browser chats, or imported from past threads can all become part of the same memory layer. OpenClaw is not an island. It is one connected path in a larger system.
Trace a decision's history
Ask the AI "how did this idea develop?" and it shows you: the original source documents that informed it, which related memories were synthesized into a higher-level insight, and how your understanding changed over time.
Optional: start every session already in context
If you enable sessionContext, Background Intelligence's daily briefing and relevant memories are injected before the first response. That gives OpenClaw immediate context from turn one. In the default mode, the agent still gets memory tools and a short system hint, but it decides when to search.
Save knowledge with structure, not just text
When you ask the AI to remember something, it doesn't just store text. It records the type (decision, learning, preference, plan...), when it happened, and links it to related knowledge. Searching by type, by date, by topic all work because the structure is there.
Trace a memory to its source conversation
When a memory was distilled from a conversation, it includes a sourceThreadId. The agent can fetch the full conversation with nowledge_mem_thread_fetch to see the complete context: what was said, what was decided, and how the conclusion was reached.
Search past conversations directly
Ask "find the conversation where we discussed Redis caching" and the agent uses nowledge_mem_thread_search to find matching threads with message snippets. Then fetch full messages with nowledge_mem_thread_fetch for progressive retrieval of long conversations.
Slash commands: /remember, /recall, /forget
How It Works
Per-turn flow
Every time you send a message, the plugin injects behavioral guidance before the agent processes it. The agent then decides which tools to call.
The behavioral skill and always-on hook nudge the agent to search before answering and save after deciding. Here's when each tool fires:
| Scenario | Tool | What happens |
|---|---|---|
| User asks a question | memory_search | Search knowledge base before answering. Returns sourceThreadId when available. |
| Decision made, insight learned | nowledge_mem_save | Structured save: type + labels + temporal context. |
| "What was I doing last week?" | nowledge_mem_timeline | Activity feed grouped by day. Supports exact date ranges. |
| "How is X connected to Y?" | nowledge_mem_connections | Graph walk: edges, entities, EVOLVES chains, provenance. |
| Need today's focus/priorities | nowledge_mem_context | Read Working Memory daily briefing. |
| Memory has sourceThreadId | nowledge_mem_thread_fetch | Fetch full source conversation with pagination. |
| "Find our discussion about X" | nowledge_mem_thread_search | Search past conversations by keyword. |
| "Forget X" | nowledge_mem_forget | Delete by ID or search query. |
| "Is my setup working?" | nowledge_mem_status | Show config, connectivity, and version. |
Session lifecycle (automatic capture)
When sessions end, conversations are automatically captured and optionally distilled into structured memories.
Key points:
- Thread capture is unconditional: every conversation is saved and searchable
- LLM distillation only runs at agent_end, not during checkpoints
- Distilled memories carry sourceThreadId, linking them back to the source conversation
Progressive retrieval (memory to thread to messages)
Memories distilled from conversations carry a sourceThreadId. This creates a chain: search memories, trace to source conversation, read full messages with pagination.
Two entry points into past conversations:
- From a memory: memory_search or memory_get returns sourceThreadId, then fetch the source conversation
- Direct search: nowledge_mem_thread_search finds conversations by keyword, then fetch any match
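The chain above can be sketched in a few lines. This is an illustrative model only: the FakeClient class and its method shapes are assumptions standing in for the real tool layer, though the method names mirror the tools in this doc.

```python
# Illustrative sketch (not the real plugin API): how a memory's
# sourceThreadId chains into paginated thread fetches.

class FakeClient:
    """Stand-in for the nmem-backed tool layer, for illustration only."""
    def __init__(self):
        self.memories = [
            {"id": "m1", "text": "Chose PostgreSQL for task events",
             "sourceThreadId": "t1"}]
        self.threads = {"t1": [
            {"role": "user", "content": "Postgres or Mongo?"},
            {"role": "assistant", "content": "PostgreSQL, for transactions."}]}

    def memory_search(self, query):
        return [m for m in self.memories if query.lower() in m["text"].lower()]

    def thread_fetch(self, thread_id, page=0, page_size=1):
        msgs = self.threads[thread_id]
        return {"messages": msgs[page * page_size:(page + 1) * page_size]}

def trace_to_source(client, query):
    """memory_search -> sourceThreadId -> thread_fetch, page by page."""
    for mem in client.memory_search(query):
        thread_id = mem.get("sourceThreadId")
        if not thread_id:
            continue  # memory saved directly, no source conversation
        page, out = 0, []
        while True:
            batch = client.thread_fetch(thread_id, page=page)
            if not batch["messages"]:
                break  # past the last page
            out.extend(batch["messages"])
            page += 1
        return out
    return []

messages = trace_to_source(FakeClient(), "postgresql")
```

The point of the pagination loop is that a long conversation never has to be loaded whole: the agent can stop fetching as soon as it has the context it needs.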
Three modes
The plugin supports three operating modes. Choose based on how much you want to guarantee versus how much token budget you're willing to spend.
| Mode | Config | Behavior | Token cost |
|---|---|---|---|
| Default (recommended) | sessionContext: false | Agent calls 10 tools on demand. Conversations captured + distilled at session end. | Lowest overhead. The agent decides when to search. |
| Session context | sessionContext: true | Working Memory + relevant memories injected at prompt time, plus all 10 tools still available. | Higher per-turn context cost, but context is present from turn one. |
| Minimal | sessionDigest: false | Tool-only, no automatic capture. | Small overhead from the always-on system hint only. |
Which mode should you use?
- Most users: start with default. The agent gets behavioral guidance nudging it to search before answering and save after deciding. It works well for most conversations.
- Short sessions or critical accuracy: enable sessionContext. This guarantees relevant memories are present from the first turn. The agent doesn't need to decide whether to search. The tradeoff is a larger prompt on each turn.
- Full manual control: set sessionDigest: false. You control what gets saved (via /remember or nowledge_mem_save) and nothing is captured automatically.
sessionContext - Automatic context injection
When enabled, the plugin injects context at prompt time:
- Reads your Working Memory, the daily briefing Background Intelligence generates each morning
- Searches your knowledge graph for memories relevant to your current prompt
- Prepends the recalled material as run-specific context before the model answers, while the stable guidance stays in system-prompt space
The behavioral guidance automatically adjusts when sessionContext is on. It tells the agent that context has already been injected, so memory_search should only be used for specific follow-up queries, not broad recall. This prevents redundant searches for the same context.
Useful for giving the agent immediate context without waiting for it to search proactively. Best for short sessions and critical workflows where you want guaranteed recall.
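Conceptually, the injection step selects and formats a handful of memories before the model sees your prompt. The sketch below is an assumption about that assembly, using the Max context results and Min recall score settings from the configuration table; the function name and memory shape are illustrative, not the plugin's code.

```python
# Hypothetical sketch of sessionContext assembly: prepend the daily
# briefing plus the top-scoring relevant memories as run-specific context.

def build_injected_context(briefing, memories, max_results=5, min_score=0.0):
    """Keep memories scoring at or above min_score, capped at max_results."""
    picked = [m for m in sorted(memories, key=lambda m: m["score"], reverse=True)
              if m["score"] >= min_score][:max_results]
    lines = ["## Working Memory briefing", briefing, "", "## Relevant memories"]
    lines += [f"- ({m['score']:.2f}) {m['text']}" for m in picked]
    return "\n".join(lines)

ctx = build_injected_context(
    "Focus: finish the task-events migration",
    [{"score": 0.91, "text": "Chose PostgreSQL for task events"},
     {"score": 0.22, "text": "Lunch order preferences"}],
    max_results=5, min_score=0.5)
```

Raising min_score trades recall for a smaller prompt, which is exactly the per-turn cost the mode table describes.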
sessionDigest - Thread capture + LLM distillation (default: on)
On by default. Two things happen at session lifecycle events (agent_end, after_compaction, before_reset):
1. Thread capture (always). The full conversation is appended to a persistent thread in Nowledge Mem. This happens unconditionally: every message is preserved, searchable via nowledge_mem_thread_search.
2. LLM distillation (when worthwhile). After thread capture, a lightweight LLM triage determines if the conversation contains save-worthy content (decisions, insights, preferences). If yes, a full distillation pass extracts structured memories with proper types, labels, and temporal data. Works in any language.
Context compaction: when OpenClaw compresses a long conversation, the plugin captures the transcript first. Nothing is lost.
Deduplication: thread appends are idempotent by message ID. No duplicates.
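Idempotency-by-message-ID can be pictured as a set of seen IDs guarding the append. This is a sketch of the stated guarantee, not the plugin's implementation:

```python
# Sketch: idempotent thread appends keyed by message ID, so replaying
# a capture (e.g. after compaction) cannot create duplicates.

class Thread:
    def __init__(self):
        self.messages = []
        self._seen = set()

    def append(self, message):
        """Append a message unless its ID was already captured."""
        if message["id"] in self._seen:
            return False  # duplicate: no-op
        self._seen.add(message["id"])
        self.messages.append(message)
        return True

t = Thread()
t.append({"id": "msg-1", "content": "hello"})
t.append({"id": "msg-1", "content": "hello"})  # replayed after compaction
```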
Common Questions
Does the agent always search before answering?
The plugin uses two layers to drive recall. First, a behavioral skill (auto-discovered by OpenClaw) teaches the agent when and how to use memory tools. Second, a short always-on system hint reminds it to search before answering questions about prior work, decisions, or preferences. In practice, modern LLMs follow this directive guidance reliably for knowledge-related questions. For messages that don't need past context, the agent skips the search, which is the right tradeoff. If guaranteed recall matters for your use case, enable sessionContext: true. That injects relevant memories at prompt time, before the agent processes your message.
What stops the agent from saving the same thing twice?
Two layers. First, the plugin checks for near-identical existing memories before every save. If a memory with very high similarity already exists, the save is skipped and the existing memory is returned instead. Second, Nowledge Mem's Background Intelligence runs in the background and handles deeper deduplication, identifying semantic overlap across memories and linking them via EVOLVES chains (replaces, enriches, confirms, or challenges). The plugin catches obvious duplicates; Background Intelligence catches subtle ones.
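The first layer (skip near-identical saves) can be sketched as a similarity check against existing memories. The real plugin's similarity metric is unspecified; crude word-overlap (Jaccard) stands in here purely for illustration:

```python
# Sketch of the "skip near-identical saves" layer. Threshold and metric
# are assumptions; the doc only says very similar saves are skipped.

def _similarity(a, b):
    """Crude word-overlap (Jaccard) similarity in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def save_memory(store, text, threshold=0.9):
    """Return the existing memory if one is nearly identical, else save."""
    for existing in store:
        if _similarity(existing, text) >= threshold:
            return existing  # skip the save, return the match
    store.append(text)
    return text

store = ["We chose PostgreSQL for task events"]
result = save_memory(store, "We chose PostgreSQL for task events")
```

A cheap in-process check like this catches exact and near-exact repeats; the semantic overlap that only an embedding model would notice is what Background Intelligence handles later.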
What happens to conversations I don't explicitly save?
With sessionDigest enabled (the default), every conversation is saved as a searchable thread. You can find it later with nowledge_mem_thread_search. A lightweight LLM triage also checks if the conversation contained decisions, insights, or preferences worth keeping as structured memories. If yes, they're extracted with proper types, labels, and temporal context. If the conversation was routine, nothing extra is saved.
Can memories become outdated?
Yes, and that's by design. Nowledge Mem's EVOLVES chains track how understanding changes: a newer memory can supersede, enrich, or challenge an older one. Background Intelligence identifies these relationships automatically. When you search, the relevance scoring considers recency, so newer memories rank higher by default.
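The doc says scoring "considers recency" without specifying how; one common way to model that is a decay factor blended into the relevance score, sketched below. The half-life and blend weights are invented for illustration:

```python
import math

# Sketch: recency-aware ranking. Two equally relevant memories differ
# only in age; the newer one should rank first.

def ranked(memories, now_days, half_life_days=30.0):
    """Sort by relevance blended with an exponential recency decay."""
    def score(m):
        age = now_days - m["day"]
        recency = math.exp(-math.log(2) * age / half_life_days)
        return m["relevance"] * (0.7 + 0.3 * recency)  # blend, not pure decay
    return sorted(memories, key=score, reverse=True)

mems = [
    {"text": "old decision", "relevance": 0.9, "day": 0},
    {"text": "new decision", "relevance": 0.9, "day": 100},
]
top = ranked(mems, now_days=100)
```

Blending (rather than multiplying by raw decay) keeps old-but-highly-relevant memories reachable instead of burying them entirely.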
Configuration
No config is needed for a normal npm install. The installer already enables the plugin and selects the memory slot.
To change settings, open the OpenClaw dashboard and go to Automation > Plugins. Under Plugin Entries, expand Nowledge Mem, then Nowledge Mem Config. You can also type "nowledge" in the search bar to jump straight there.

Changes take effect after restarting OpenClaw.
| Setting | Default | What it does |
|---|---|---|
| Session context injection | off | Inject Working Memory + relevant memories at prompt time |
| Session digest at end | on | Capture conversations + distill key memories at session end |
| Minimum digest interval | 300s | Seconds between session digests (0 = no limit) |
| Max context results | 5 | Memories to inject at prompt time (1-20) |
| Min recall score | 0 | Only inject memories scoring above this threshold (0-100%). 0 includes all results. |
| Max thread message chars | 800 | Characters kept per captured thread message (200-20000). Raise for long code or technical conversations. |
| Server URL | empty | Remote server URL (empty = local) |
| API key | empty | API key for remote access |
Remote access
To connect to a Nowledge Mem server on another machine, create ~/.nowledge-mem/config.json with your credentials — the same file used by nmem CLI, Bub, Claude Code, and other integrations:
```json
{
  "apiUrl": "https://<your-url>",
  "apiKey": "nmem_..."
}
```
You can also set Server URL and API key in the OpenClaw dashboard plugin settings. The API key is passed only through the process environment; it never appears in logs or command history. See Access Mem Anywhere.
Troubleshooting
Plugin is installed but OpenClaw isn't using it
Check that plugins.slots.memory is exactly openclaw-nowledge-mem, and that you restarted OpenClaw after editing the config.
plugins.allow is empty warning
This means OpenClaw found a non-bundled plugin without an explicit allowlist entry yet. If this is your npm-installed plugin, add:
```json
{
  "plugins": {
    "allow": ["openclaw-nowledge-mem"]
  }
}
```
If you also use plugins.load.paths or linked workspace copies, review those paths too. OpenClaw allowlists ids, not install provenance.
"Duplicate plugin id detected" warning
This happens if you previously installed the plugin locally (e.g. with --link) and then installed from npm. OpenClaw is loading it from both places. Fix it by removing the local path from your config:
Open ~/.openclaw/openclaw.json and delete the plugins.load.paths entry that points to the local plugin directory:
```json
"load": {
  "paths": []
}
```
Then restart OpenClaw. The warning will be gone and only the npm-installed version will load.
Status shows not responding
```
nmem status
curl -sS http://127.0.0.1:14242/health
```

Search returns too few results
Raise maxContextResults to 8 or 12.
Why Nowledge Mem?
Other memory tools store what you said as text and retrieve it by semantic similarity. Nowledge Mem is different.
Knowledge has structure. Every memory knows what type it is (decision, learning, plan, preference), when it happened, which source documents it came from, and how it relates to other memories. That's what makes search precise and reasoning reliable.
Knowledge evolves. The understanding you wrote today connects to the updated version you saved three months later. You can see how your thinking changed, without losing the intermediate steps.
Knowledge has provenance. Every piece of knowledge extracted from a PDF, document, or web page links back to its source. When the AI says "based on your March design doc," you can verify it.
Knowledge travels across tools. What you learned in Cursor, saved in Claude, refined in ChatGPT, all available in OpenClaw. Your knowledge belongs to you, not to any one tool.
Local first, no cloud required. Your knowledge lives on your machine. Remote access is available when you need it, not imposed by default.
How search ranking works: Search & Relevance.
For Advanced Users
OpenClaw's MEMORY.md file still works for workspace context. Memory tool calls are handled by Nowledge Mem, but the two can coexist.
The plugin communicates with Nowledge Mem through the nmem CLI. Local and remote modes behave identically. Configure the address once and every tool call routes correctly.
Related
- Integrations overview - native integrations, reusable packages, MCP, and browser capture
- Claude Code · Claude Desktop · Codex CLI · Alma · Raycast · Other Chat AI
References
- Plugin source: nowledge-mem-openclaw-plugin
- OpenClaw docs: Plugin system
- Changelog: CHANGELOG.md