OpenClaw × Nowledge Mem
Set up OpenClaw with persistent, cross-session memory in 5 minutes.
```bash
openclaw plugins install @nowledge/openclaw-nowledge-mem
```

Once configured, your OpenClaw agent remembers what you said in the last session, the decision you made last week, and the knowledge you wrote into a document three months ago.
Before You Start
You need:
- Nowledge Mem running locally (installation)
- OpenClaw installed (OpenClaw getting started)
- nmem CLI on your PATH. In Nowledge Mem, go to Settings > Preferences > Developer Tools and click Install CLI. Or install standalone:

```bash
pip install nmem-cli
```

Verify the prerequisites:

```bash
nmem status  # should show Nowledge Mem is running
openclaw --version
```

Setup
Install the plugin
```bash
openclaw plugins install @nowledge/openclaw-nowledge-mem
```

To update to the latest version:
```bash
openclaw plugins update openclaw-nowledge-mem
```

Enable the plugin in OpenClaw config
Open ~/.openclaw/openclaw.json and add:
```json
{
  "plugins": {
    "slots": {
      "memory": "openclaw-nowledge-mem"
    },
    "entries": {
      "openclaw-nowledge-mem": {
        "enabled": true
      }
    }
  }
}
```

Settings are in OpenClaw's Config > Plugins > Plugin Entries. See Configuration for details.
Restart OpenClaw and verify
```bash
openclaw nowledge-mem status
```

If Nowledge Mem is reachable, you're done.
Verify It Works (1 Minute)
In OpenClaw chat:
1. `/remember We chose PostgreSQL for task events`
2. `/recall PostgreSQL` - should find it immediately
3. `/new` - start a fresh session
4. Ask: "What database did we choose for task events?" - it remembers across sessions
5. Ask: "What was I working on this week?" - weekly activity view
6. Ask: "What was I doing on February 17?" - down to the exact day
7. `/forget PostgreSQL task events` - clean deletion
If all seven steps work, the memory system is fully running.
What You Can Do
Remember anything, forever
Tell the AI `/remember We decided against microservices, the team is too small`. Next week, in a different session, ask "what was that decision about microservices?" It finds it.
Browse your work by date
Ask "what was I doing last Tuesday?" and the AI lists everything from that day: memories you saved, documents you added, insights generated. You can ask for a specific date, not just "the past N days."
Trace a decision's history
Ask the AI "how did this idea develop?" and it shows you: the original source documents that informed it, which related memories were synthesized into a higher-level insight, and how your understanding changed over time.
Start every session already in context
Every morning, the Knowledge Agent produces a daily briefing: what you're focused on, open questions, recent changes. Your agent reads it at the start of every session. You never repeat yourself.
Save knowledge with structure, not just text
When you ask the AI to remember something, it doesn't just store text. It records the type (decision, learning, preference, plan...), when it happened, and links it to related knowledge. Searching by type, by date, by topic all work because the structure is there.
Trace a memory to its source conversation
When a memory was distilled from a conversation, it includes a sourceThreadId. The agent can fetch the full conversation with nowledge_mem_thread_fetch to see the complete context: what was said, what was decided, and how the conclusion was reached.
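Putting the pieces together, a stored memory record might look like the following. This is a hypothetical shape for illustration only: the field names besides sourceThreadId, and the id formats, are assumptions, not the actual Nowledge Mem schema.

```json
{
  "id": "mem_123",
  "type": "decision",
  "content": "Chose PostgreSQL for task events",
  "labels": ["database", "architecture"],
  "occurredAt": "2025-02-17",
  "sourceThreadId": "thread_abc"
}
```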
Search past conversations directly
Ask "find the conversation where we discussed Redis caching" and the agent uses nowledge_mem_thread_search to find matching threads with message snippets. Then fetch full messages with nowledge_mem_thread_fetch for progressive retrieval of long conversations.
Slash commands: /remember, /recall, /forget
How It Works
Per-turn flow
Every time you send a message, the plugin injects behavioral guidance before the agent processes it. The agent then decides which tools to call.
The behavioral hook nudges the agent to search before answering and save after deciding. Here's when each tool fires:
| Scenario | Tool | What happens |
|---|---|---|
| User asks a question | memory_search | Search knowledge base before answering. Returns sourceThreadId when available. |
| Decision made, insight learned | nowledge_mem_save | Structured save: type + labels + temporal context. |
| "What was I doing last week?" | nowledge_mem_timeline | Activity feed grouped by day. Supports exact date ranges. |
| "How is X connected to Y?" | nowledge_mem_connections | Graph walk: edges, entities, EVOLVES chains, provenance. |
| Need today's focus/priorities | nowledge_mem_context | Read Working Memory daily briefing. |
| Memory has sourceThreadId | nowledge_mem_thread_fetch | Fetch full source conversation with pagination. |
| "Find our discussion about X" | nowledge_mem_thread_search | Search past conversations by keyword. |
| "Forget X" | nowledge_mem_forget | Delete by ID or search query. |
| "Is my setup working?" | nowledge_mem_status | Show config, connectivity, and version. |
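The per-turn flow above is, conceptually, just a prompt transform: guidance is prepended before the agent sees the message, and tool choice is left to the model. A minimal Python sketch of the idea; the function and guidance text are illustrative, not the plugin's actual code:

```python
# Behavioral guidance the hook prepends to every turn (wording is hypothetical).
GUIDANCE = (
    "Before answering questions about past work, call memory_search. "
    "After a decision or insight, call nowledge_mem_save."
)

def inject_guidance(user_message: str) -> str:
    """Prepend guidance to the turn; the agent still decides on its own
    which memory tools, if any, to call."""
    return f"{GUIDANCE}\n\n{user_message}"

prompt = inject_guidance("What database did we choose for task events?")
```

The key design point is that the hook only nudges: nothing in this path forces a tool call, which is why the guidance costs roughly a fixed ~50 tokens per turn.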
Session lifecycle (automatic capture)
When sessions end, conversations are automatically captured and optionally distilled into structured memories.
Key points:
- Thread capture is unconditional: every conversation is saved and searchable
- LLM distillation only runs at agent_end, not during checkpoints
- Distilled memories carry sourceThreadId, linking them back to the source conversation
Progressive retrieval (memory to thread to messages)
Memories distilled from conversations carry a sourceThreadId. This creates a chain: search memories, trace to source conversation, read full messages with pagination.
Two entry points into past conversations:
- From a memory: memory_search or memory_get returns sourceThreadId, then fetch the source conversation
- Direct search: nowledge_mem_thread_search finds conversations by keyword, then fetch any match
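The chain can be sketched as three steps, here stubbed with in-memory data. The function names mirror the tools above, but the bodies are toy stand-ins, not the plugin's real implementations:

```python
# Stub data standing in for Nowledge Mem responses.
MEMORIES = [{"id": "m1", "content": "Chose PostgreSQL", "sourceThreadId": "t1"}]
THREADS = {"t1": {"messages": [f"msg {i}" for i in range(120)]}}

def memory_search(query: str) -> list[dict]:
    """Step 1: find memories matching the query."""
    return [m for m in MEMORIES if query.lower() in m["content"].lower()]

def thread_fetch(thread_id: str, offset: int = 0, limit: int = 50) -> list[str]:
    """Steps 2-3: paginated read of the source conversation."""
    msgs = THREADS[thread_id]["messages"]
    return msgs[offset:offset + limit]

# Search a memory, trace it to its source thread, page through the messages.
memory = memory_search("postgresql")[0]
first_page = thread_fetch(memory["sourceThreadId"])
second_page = thread_fetch(memory["sourceThreadId"], offset=50)
```

Pagination is what makes this "progressive": the agent reads only as much of a long conversation as it needs.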
Three modes
The plugin supports three operating modes. Choose based on how much you want to guarantee versus how much token budget you're willing to spend.
| Mode | Config | Behavior | Token cost |
|---|---|---|---|
| Default (recommended) | sessionContext: false | Agent calls 10 tools on demand. Conversations captured + distilled at session end. | ~50 tokens/turn (guidance) + cheap triage per session |
| Session context | sessionContext: true | Working Memory + relevant memories injected at prompt time, plus all 10 tools still available. | ~1-2KB per prompt |
| Minimal | sessionDigest: false | Tool-only, no automatic capture. | ~50 tokens/turn (guidance only) |
Which mode should you use?
- Most users: start with default. The agent gets behavioral guidance nudging it to search before answering and save after deciding. It works well for most conversations.
- Short sessions or critical accuracy: enable
sessionContext. This guarantees relevant memories are present from the first turn. The agent doesn't need to decide whether to search. The tradeoff is ~1-2 KB of context per turn. - Full manual control: set
sessionDigest: false. You control what gets saved (via/rememberornowledge_mem_save) and nothing is captured automatically.
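Assuming the flags live under the plugin's entry in ~/.openclaw/openclaw.json (the exact nesting under a "config" key is an assumption; the settings UI is authoritative), the default mode would look like:

```json
{
  "plugins": {
    "entries": {
      "openclaw-nowledge-mem": {
        "enabled": true,
        "config": {
          "sessionContext": false,
          "sessionDigest": true
        }
      }
    }
  }
}
```

Set `sessionContext: true` for session-context mode, or `sessionDigest: false` for minimal mode.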
sessionContext - Automatic context injection
When enabled, the plugin injects context at prompt time:
- Reads your Working Memory, the daily briefing the Knowledge Agent generates each morning
- Searches your knowledge graph for memories relevant to your current prompt
- Prepends both as invisible context to the system prompt
The behavioral guidance automatically adjusts when sessionContext is on. It tells the agent that context has already been injected, so memory_search should only be used for specific follow-up queries, not broad recall. This prevents redundant searches for the same context.
Useful for giving the agent immediate context without waiting for it to search proactively. Best for short sessions and critical workflows where you want guaranteed recall.
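In effect, prompt assembly with sessionContext on amounts to prepending two blocks before the user's message. A sketch under the assumptions stated in the list above; the section labels and function are illustrative, not the plugin's code:

```python
def build_prompt(system_prompt: str, working_memory: str,
                 relevant: list[str], user_message: str,
                 max_results: int = 5) -> str:
    """Prepend the daily briefing and top-N relevant memories so the
    agent starts the turn already in context."""
    injected = "\n".join(relevant[:max_results])
    return (f"{system_prompt}\n\n[Working Memory]\n{working_memory}\n\n"
            f"[Relevant memories]\n{injected}\n\n{user_message}")

prompt = build_prompt(
    "You are a helpful agent.",
    "Focus: task-event pipeline. Open question: retention policy.",
    ["Decision: PostgreSQL for task events",
     "Preference: small team, no microservices"],
    "How should we store task events?",
)
```

The `max_results` cap corresponds to the "Max context results" setting, which is what bounds the ~1-2 KB per-prompt cost.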
sessionDigest - Thread capture + LLM distillation (default: on)
On by default. Two things happen at session lifecycle events (agent_end, after_compaction, before_reset):
1. Thread capture (always). The full conversation is appended to a persistent thread in Nowledge Mem. This happens unconditionally: every message is preserved, searchable via nowledge_mem_thread_search.
2. LLM distillation (when worthwhile). After thread capture, a lightweight LLM triage determines if the conversation contains save-worthy content (decisions, insights, preferences). If yes, a full distillation pass extracts structured memories with proper types, labels, and temporal data. Works in any language.
Context compaction: when OpenClaw compresses a long conversation, the plugin captures the transcript first. Nothing is lost.
Deduplication: thread appends are idempotent by message ID. No duplicates.
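Idempotency by message ID can be modeled like this. A toy sketch of the behavior, not the actual implementation:

```python
class Thread:
    """Appends are keyed by message ID, so replaying the same batch
    (e.g. once after compaction and again at session end) is a no-op."""

    def __init__(self):
        self.messages: dict[str, dict] = {}

    def append(self, batch: list[dict]) -> None:
        for msg in batch:
            # setdefault keeps the first copy and silently skips repeats.
            self.messages.setdefault(msg["id"], msg)

thread = Thread()
batch = [{"id": "m1", "text": "hello"}, {"id": "m2", "text": "world"}]
thread.append(batch)
thread.append(batch)  # replayed batch: no duplicates
```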
Common Questions
Does the agent always search before answering?
The behavioral guidance nudges the agent to search, but doesn't force it. This is deliberate: forcing a search on every turn would add latency and cost for messages that don't need past context. In practice, modern LLMs follow behavioral guidance reliably for knowledge-related questions. If guaranteed recall matters for your use case, enable sessionContext: true. That injects relevant memories at prompt time, before the agent processes your message.
What stops the agent from saving the same thing twice?
Two layers. First, the plugin checks for near-identical existing memories before every save. If a memory with very high similarity already exists, the save is skipped and the existing memory is returned instead. Second, Nowledge Mem's Knowledge Agent runs in the background and handles deeper deduplication, identifying semantic overlap across memories and linking them via EVOLVES chains (replaces, enriches, confirms, or challenges). The plugin catches obvious duplicates; the Knowledge Agent catches subtle ones.
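The first layer amounts to a guarded save. Sketched here with a naive word-overlap similarity; the plugin's actual similarity measure and threshold are not specified in this document:

```python
def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets -- a stand-in for the real measure."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def save_memory(store: list[str], content: str, threshold: float = 0.9) -> str:
    """Skip the save and return the existing memory when a
    near-identical one is already stored."""
    for existing in store:
        if similarity(existing, content) >= threshold:
            return existing  # obvious duplicate: nothing new is created
    store.append(content)
    return content

store: list[str] = []
save_memory(store, "We chose PostgreSQL for task events")
save_memory(store, "We chose PostgreSQL for task events")  # skipped
```

Subtler duplicates, like a rephrased version of the same decision, fall below this kind of threshold, which is why the background Knowledge Agent provides the second layer.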
What happens to conversations I don't explicitly save?
With sessionDigest enabled (the default), every conversation is saved as a searchable thread. You can find it later with nowledge_mem_thread_search. A lightweight LLM triage also checks if the conversation contained decisions, insights, or preferences worth keeping as structured memories. If yes, they're extracted with proper types, labels, and temporal context. If the conversation was routine, nothing extra is saved.
Can memories become outdated?
Yes, and that's by design. Nowledge Mem's EVOLVES chains track how understanding changes: a newer memory can supersede, enrich, or challenge an older one. The Knowledge Agent identifies these relationships automatically. When you search, the relevance scoring considers recency, so newer memories rank higher by default.
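Resolving the current version of an evolving memory amounts to following the chain to the node nothing newer supersedes. A sketch with an assumed `evolves_from` link field (the actual edge representation in Nowledge Mem may differ):

```python
def head_of_chain(memories: list[dict]) -> dict:
    """Return the memory that no newer memory evolves from,
    i.e. the current understanding."""
    superseded = {m["evolves_from"] for m in memories if m.get("evolves_from")}
    heads = [m for m in memories if m["id"] not in superseded]
    return max(heads, key=lambda m: m["date"])

chain = [
    {"id": "a", "date": "2025-01-05", "evolves_from": None,
     "content": "Try microservices"},
    {"id": "b", "date": "2025-03-12", "evolves_from": "a",
     "content": "Decided against microservices: team too small"},
]
current = head_of_chain(chain)
```

Note the older node stays in the chain: the intermediate steps remain searchable, they just rank lower.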
Configuration
No config needed to get started. The defaults work for local mode.
To change settings, go to OpenClaw Config > Plugins > Plugin Entries > Nowledge Mem Config. You'll see toggles for session context and session digest, number inputs for intervals and result counts, and text fields for remote server URL and API key.

Changes take effect after restarting OpenClaw.
| Setting | Default | What it does |
|---|---|---|
| Session context injection | off | Inject Working Memory + relevant memories at prompt time |
| Session digest at end | on | Capture conversations + distill key memories at session end |
| Minimum digest interval | 300s | Seconds between session digests (0 = no limit) |
| Max context results | 5 | Memories to inject at prompt time (1-20) |
| Server URL | empty | Remote server URL (empty = local) |
| API key | empty | API key for remote access |
Remote access
To connect to a Nowledge Mem server on another machine, set Server URL and API key in the plugin settings. The API key is passed only through the process environment. It never appears in logs or command history. See Access Mem Anywhere.
Troubleshooting
Plugin is installed but OpenClaw isn't using it
Check that plugins.slots.memory is exactly openclaw-nowledge-mem, and that you restarted OpenClaw after editing the config.
"Duplicate plugin id detected" warning
This happens if you previously installed the plugin locally (e.g. with --link) and then installed from npm. OpenClaw is loading it from both places. Fix it by removing the local path from your config:
Open ~/.openclaw/openclaw.json and delete the plugins.load.paths entry that points to the local plugin directory:
```json
"load": {
  "paths": []
}
```

Then restart OpenClaw. The warning will be gone and only the npm-installed version will load.
Status shows not responding
```bash
nmem status
curl -sS http://127.0.0.1:14242/health
```

Search returns too few results
Raise maxContextResults to 8 or 12.
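Assuming the setting sits under the plugin's config entry (the key name comes from the setting above, but the nesting is an assumption; the settings UI is authoritative), that would be:

```json
{
  "plugins": {
    "entries": {
      "openclaw-nowledge-mem": {
        "config": {
          "maxContextResults": 12
        }
      }
    }
  }
}
```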
Why Nowledge Mem?
Other memory tools store what you said as text and retrieve it by semantic similarity. Nowledge Mem is different.
Knowledge has structure. Every memory knows what type it is (decision, learning, plan, preference), when it happened, which source documents it came from, and how it relates to other memories. That's what makes search precise and reasoning reliable.
Knowledge evolves. The understanding you wrote today connects to the updated version you saved three months later. You can see how your thinking changed, without losing the intermediate steps.
Knowledge has provenance. Every piece of knowledge extracted from a PDF, document, or web page links back to its source. When the AI says "based on your March design doc," you can verify it.
Knowledge travels across tools. What you learned in Cursor, saved in Claude, refined in ChatGPT, all available in OpenClaw. Your knowledge belongs to you, not to any one tool.
Local first, no cloud required. Your knowledge lives on your machine. Remote access is available when you need it, not imposed by default.
How search ranking works: Search & Relevance.
For Advanced Users
OpenClaw's MEMORY.md workspace file still works for workspace context. Memory tool calls are handled by Nowledge Mem, but both can coexist.
The plugin communicates with Nowledge Mem through the nmem CLI. Local and remote modes behave identically. Configure the address once and every tool call routes correctly.
Related
- Integrations overview - native integrations, reusable packages, MCP, and browser capture
- Claude Code · Claude Desktop · Codex CLI · Alma · Raycast · Other Chat AI
References
- Plugin source: nowledge-mem-openclaw-plugin
- OpenClaw docs: Plugin system
- Changelog: CHANGELOG.md