Integrations
Connect any AI tool to your knowledge. Switch tools freely; your context stays.
Nowledge Mem connects to the tools you actually use. Your knowledge stays in one place; the tools come and go.
New user?
If you just installed Mem and are not sure which path applies to you, read Start Here first. Then come back once you know whether you need a native integration, the browser extension, or a custom path.
Choose Your Path
Nowledge Mem has multiple integration options, but the first decision is simple: use a dedicated native integration when your tool has one. Only drop to shared packages, direct MCP, CLI, or import flows when there is no tool-specific path.
For most people, the decision tree is:
- If your tool has a dedicated Nowledge integration, install that.
- If your work mainly happens in ChatGPT, Claude, or Gemini on the web, install the browser extension.
- If your tool has no dedicated package but supports shared skills or prompts, use a reusable workflow package.
- Only use direct MCP when the client supports MCP but there is no better dedicated path.
- Use the CLI when you want manual commands, scripts, or local automation.
| If you use... | Recommended path | Integration type | Why |
|---|---|---|---|
| Gemini CLI | Gemini CLI extension | Native integration | Dedicated extension with hooks, commands, skills, and real session import backed by nmem |
| Claude Code | Claude Code plugin | Native integration | Dedicated plugin with hooks, slash commands, Working Memory load, and automatic session capture |
| Droid | Droid plugin | Native integration | Dedicated Factory plugin with hooks, commands, skills, and clear handoff behavior backed by nmem |
| Cursor | Cursor plugin | Native integration | Dedicated plugin package with bundled MCP config, rules, skills, and clear handoff behavior |
| OpenClaw | OpenClaw | Native integration | Dedicated plugin with tool-specific lifecycle support, recall, and capture behavior |
| Alma | Alma | Native integration | Dedicated plugin with tool-specific lifecycle support and recall behavior |
| OpenCode and many coding agents that support shared skills | npx skills | Reusable workflow package | Reusable memory workflows without hand-writing your own prompts |
| MCP clients without a dedicated Nowledge package | Direct MCP | Direct MCP | Standard tool access through one shared server config |
| Chrome or Edge | Browser extension | Browser capture | Capture and distill conversations directly from the browser side panel |
| Terminals, scripts, or remote workflows | CLI | Direct commands | Use nmem directly for search, save, automation, and supported thread import |
| Codex CLI | Codex CLI guide | Tool-specific workflow package | Tool-specific prompt package plus AGENTS.md guidance and real session save |
| Notes you already own | Obsidian, Notion, Apple Notes | Local knowledge source | Search notes beside memories through AI Now on your machine |
| Exported or historical conversations | Threads guide | Import path | Import files, exports, and past sessions into Mem |
Most users only need one row
Find the tool you already use in the table above, open that guide, and ignore the rest for now.
Fastest Reusable Setup For Many Coding Agents
For OpenCode and other agent environments supported by the skills installer:
```
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```

This installs four skills: `search-memory`, `read-working-memory`, `save-handoff`, and `distill-memory`. After setup, your agent reads context at session start, routes recall across memories and threads when relevant, and saves resumable handoffs when asked.
If your tool has its own reusable package instead of the generic skills path, use that dedicated guide. Codex, for example, should follow the Codex CLI guide.
Prefer native integrations when available
Use the dedicated package when your tool has one: Claude Code, Gemini CLI, Droid, Cursor, OpenClaw, or Alma. These integrations add tool-specific behavior on top of the shared memory model.
Dedicated Nowledge Integrations And Full Session Capture
If you need the actual recorded session, not just a resumable summary, the path matters.
| Integration | Dedicated Nowledge package | Full session capture | Notes |
|---|---|---|---|
| Claude Code | Yes | Yes | Native plugin with lifecycle hooks and real session import. |
| Gemini CLI | Yes | Yes | Native extension with save-thread plus separate save-handoff. |
| Droid | Yes | Not in the plugin today | Native plugin exposes save-handoff. It intentionally does not claim save-thread until Droid has a real transcript importer. |
| Cursor | Yes | Not in the plugin today | Native plugin exposes save-handoff. Use in-app discovery/import for local Cursor conversations. |
| OpenClaw | Yes | Yes | Native plugin captures real sessions automatically. |
| Alma | Yes | Yes | Native plugin supports real session capture. |
| Codex CLI | Tool-specific package, not a Gemini-style plugin | Yes | Use the dedicated Codex prompt-pack path plus nmem t save --from codex. |
| Generic npx skills agents | No dedicated runtime importer | No | Use save-handoff, not save-thread. Shared skills cannot honestly promise transcript-backed import across many hosts. |
Why generic skills cannot promise full session capture
Shared skills can shape prompting, but they do not control whether the host agent exposes readable session files or a stable transcript API. That is why the generic npx skills path uses save-handoff as the honest default, while dedicated integrations expose real save-thread behavior only when the runtime truly supports it.
For the fuller thread matrix and import paths, see Threads.
Agent Intent Control For Custom Agents
Native integrations already bundle behavior rules that teach the agent when to read Working Memory, search past knowledge, or save durable insights.
If your agent does not have a dedicated plugin or extension, you should configure that intent directly.
Step 1: Give The Agent A Memory Surface
Use one of these:
- `npx skills` for shared skill-based behavior
- `nmem` CLI for terminal-visible commands
- MCP when the client can call tools directly
Step 2: Add A Short Intent Policy
Put a policy like this in AGENTS.md, CLAUDE.md, GEMINI.md, or the system prompt:
## Nowledge Mem
Use Nowledge Mem as your external memory system.
At session start:
- Run `nmem --json wm read` once to load current priorities and recent context.
- Do not re-read it on every turn unless the user asks or the session context changed materially.
Search proactively when:
- the user references previous work, a prior fix, or an earlier decision
- the task resumes a named feature, bug, refactor, or subsystem
- a debugging pattern resembles something solved earlier
- the user asks for rationale, preferences, procedures, or recurring workflow details
Retrieval routing:
- Start with `nmem --json m search` for durable knowledge.
- Use `nmem --json t search` when the user is asking about a prior discussion or exact conversation history.
- If a result includes `source_thread`, inspect it progressively with `nmem --json t show <thread_id> --limit 8 --offset 0 --content-limit 1200`.
When preserving knowledge:
- Use `nmem --json m add` for genuinely new durable knowledge.
- If an existing memory already captures the same decision, preference, or workflow and the new information refines it, use `nmem m update <id> ...` instead of creating a duplicate.
- Use a handoff save only when the user explicitly asks for a resumable checkpoint or handoff summary.

For MCP-only agents, use the same policy but replace the commands with the tool names `read_working_memory`, `memory_search`, `thread_search`, `thread_fetch_messages`, `memory_add`, and `memory_update`.
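The progressive thread inspection recommended in the policy above can be sketched as a small helper that advances the offset one page at a time. The flag names mirror the policy; the helper itself and its page sizes are illustrative, not part of the CLI:

```python
# Sketch: build paged `nmem t show` invocations instead of dumping an
# entire thread at once. Each page advances --offset by the page size.

def thread_page_commands(thread_id, pages=3, limit=8, content_limit=1200):
    """Return one `nmem` command string per page of the thread."""
    commands = []
    for page in range(pages):
        offset = page * limit
        commands.append(
            f"nmem --json t show {thread_id} "
            f"--limit {limit} --offset {offset} --content-limit {content_limit}"
        )
    return commands

# Two pages: offsets 0 and 8 with the defaults above.
cmds = thread_page_commands("thr_123", pages=2)
```

An agent following the policy would run the first command, and only continue to later offsets if the earlier messages were relevant.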
Step 3: Keep The Prompt Direct
The best intent prompts are short and operational. Tell the agent exactly:
- when to read Working Memory
- when to search proactively
- when to use thread tools instead of memory search
- when to add a new memory versus update an existing one
- when handoff save is explicit-only
Model Context Protocol (MCP)
MCP is Nowledge Mem's compatibility layer for tools that speak MCP directly. Use it when your client supports MCP but does not have a dedicated Nowledge integration.
Packaged Integrations vs Direct MCP
| Path | Use when | Examples |
|---|---|---|
| Native integration | There is a dedicated Nowledge package for your tool | Claude Code plugin, Gemini CLI extension, Droid plugin, Cursor plugin, OpenClaw plugin, Alma plugin |
| Reusable workflow package | Your agent can install shared skills or prompt packs | npx skills, Codex prompt package |
| Direct MCP | Your client supports MCP and you want standard tool access | Cursor manual config, Claude Desktop, ChatWise, GitHub Copilot |
How to think about MCP
Packaged integrations are the recommended path when available. They may use nmem, MCP, tool-native hooks, or a mix internally, but users should think in terms of the integration they install, not the transport hidden underneath.
MCP Capabilities
- Search memories: `memory_search`
- Read Working Memory: `read_working_memory`
- Add memories: `memory_add`
- Update memories: `memory_update`
- List memory labels: `list_memory_labels`
- Save/Import threads: `thread_persist`
- Prompts: `sum` (distill to memory), `save` (create a handoff summary thread)
MCP Server Configuration
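Most MCP clients accept a JSON server list in the common `mcpServers` shape. The entry below is a sketch only: the server name, transport, and port are assumptions, so copy the exact values from Mem's settings screen rather than from this example.

```json
{
  "mcpServers": {
    "nowledge-mem": {
      "url": "http://localhost:14242/mcp"
    }
  }
}
```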
System Prompts for Autonomous Behavior
For MCP-only apps to act autonomously, add these instructions to your agent's system prompt or CLAUDE.md/AGENTS.md:
## Nowledge Mem Integration
You have access to Nowledge Mem for knowledge management. Use these tools proactively:
**At Session Start (`read_working_memory`):**
- Call `read_working_memory` for today's briefing
- Understand user's active focus areas, priorities, and unresolved flags
- Reference this context naturally when it connects to the current task
**When to Search (`memory_search`):**
- Current topic connects to prior work
- Problem resembles past solved issue
- User asks about previous decisions ("why did we choose X?")
- Complex debugging that may match past root causes
**When to Save Memories (`memory_add`):**
- After solving complex problems or debugging
- When important decisions are made with rationale
- After discovering key insights ("aha" moments)
- When documenting procedures or workflows
- Skip: routine fixes, work in progress, generic Q&A
**When to Update Existing Memories (`memory_update`):**
- Search before saving when the topic looks familiar
- If recall already surfaced the same decision, preference, or workflow, update that memory instead of adding a near-duplicate
- Use updates when the new information refines, corrects, or extends durable knowledge
**When to Search Threads (`thread_search` / `thread_fetch_messages`):**
- User is asking about a prior discussion or exact conversation history
- A memory result points to a source thread
- Fetch messages progressively instead of dumping long threads all at once

This enables autonomous memory operations in Claude Desktop, Cursor manual MCP setups, ChatWise, and other MCP-only apps.
Browser Extension
Capture memories from supported web AI chat platforms, with auto-capture, manual distill, and thread backup.
Threads
Import and manage conversations from coding agents, exported files, the API, or the command line.
Tool Guides
Browse the setup guide that matches the product you actually use. Some are native integrations, some are built-in paths, and some are reusable workflow packages.
If you want to verify that a guide really worked after setup, use How To Know Mem Is Working.
| Integration | What you get |
|---|---|
| Claude Code | Plugin with lifecycle hooks. Reads your briefing at start, saves at the right moment |
| Droid | Factory plugin with Working Memory bootstrap, routed recall, distillation, and resumable handoff summaries |
| Cursor | Plugin package with bundled MCP config, routed recall, distillation, and resumable handoffs |
| Claude Desktop | One-click extension. Search, save, and update memories in any conversation |
| Codex CLI | Prompt pack + optional AGENTS.md for Working Memory, routed recall, real session save, and distillation |
| Gemini CLI | Extension-native context, hooks, commands, and skills backed by nmem |
| Alma | Plugin with auto-recall and optional auto-capture |
| OpenClaw | Full setup guide with lifecycle and regression testing |
| Raycast | Five commands: search, add, read Working Memory, edit it locally, and explore graph connections |
| DeepChat · LobeHub | Built in. Toggle on in settings, no MCP config required |
Related guides: Claude Code · Droid · Cursor · Claude Desktop · Codex CLI · Gemini CLI · Alma · OpenClaw · Raycast · Other Chat AI
LLM-Friendly Documentation
Every page on this docs site is available as clean Markdown for AI agents and LLMs. Request any docs URL with the Accept: text/markdown header:
```
curl -H "Accept: text/markdown" https://mem.nowledge.co/docs/integrations
```

| Endpoint | What it returns |
|---|---|
| /llms-full.txt | All documentation pages concatenated into one file |
| /llms.mdx/docs/<slug> | A single page as Markdown |
No authentication required.
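In an agent pipeline, the same header can be set programmatically. A minimal sketch using Python's standard library, with the URL taken from the curl example above (no request is actually sent until you call `urlopen`):

```python
from urllib.request import Request, urlopen

def docs_request(url):
    """Build a request asking the docs site for Markdown instead of HTML."""
    return Request(url, headers={"Accept": "text/markdown"})

req = docs_request("https://mem.nowledge.co/docs/integrations")
# To fetch the page: urlopen(req).read().decode()
```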
API Integration
RESTful API for programmatic access.
Command Line Interface (CLI)
The nmem CLI provides terminal access to your knowledge base.
Installation
| Platform | Installation |
|---|---|
| macOS | Settings → Preferences → Developer Tools → Install CLI |
| Windows | Automatically installed with the app |
| Linux | Included with deb/rpm packages |
Quick Start
```
# Check connection
nmem status

# Search memories
nmem m search "project notes"

# Create a memory
nmem m add "Important insight" --title "Project Learnings"

# Save Claude Code/Codex/Gemini sessions
nmem t save --from claude-code
nmem t save --from codex -s "Summary of what was accomplished"
nmem t save --from gemini-cli -s "Summary of what was accomplished"
```

AI Agent Integration
```
# JSON output for parsing
nmem --json m search "API design"

# Chain commands
ID=$(nmem --json m add "Note" | jq -r '.id')
nmem --json m update "$ID" --importance 0.9
```

Command Reference
| Command | Alias | Description |
|---|---|---|
| nmem status | | Check server connection |
| nmem stats | | Database statistics |
| nmem memories | nmem m | Memory operations |
| nmem threads | nmem t | Thread operations |
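The `--json` flag shown in the AI Agent Integration examples makes every command's output machine-parseable. A minimal Python sketch of consuming it; the field names (`results`, `id`, `title`) are assumptions about the payload shape, not a documented schema, so check the real output of `nmem --json` on your machine:

```python
import json
import subprocess

def search_memories(query):
    """Run `nmem --json m search` and return the parsed payload.

    Assumes the CLI prints a single JSON document on stdout.
    """
    result = subprocess.run(
        ["nmem", "--json", "m", "search", query],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

# Parsing works the same on captured output, e.g. from a saved log:
sample = '{"results": [{"id": "mem_1", "title": "API design notes"}]}'
ids = [m["id"] for m in json.loads(sample)["results"]]
```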
Full Documentation
Run `nmem --help` or see the CLI Reference on GitHub.
Next Steps
- Troubleshooting: Common issues and solutions
- Background Intelligence: Knowledge graph, insights, and daily briefings