# Background Intelligence (/docs/advanced-features)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
You save a decision about PostgreSQL in January. In July, you record that you're migrating to CockroachDB. Six months apart, different contexts. Nowledge Mem links them, tracks the evolution, and the next time you search for either, both appear with the full trail of how your thinking changed.
This runs in the background. You open the app and the connections are there.
Background Intelligence requires a Pro license and a configured Remote LLM. Enable it in **Settings > Knowledge Processing**.
Knowledge Graph [#knowledge-graph]
Every memory becomes a node in a graph. The system extracts entities (people, technologies, concepts, projects) and maps how they relate to each other and to your existing knowledge.
The result: search "distributed systems" and find your memory about "Node.js microservices." The words don't match. The meaning does.
With Background Intelligence enabled, extraction runs automatically for new memories. You can also trigger it manually for older ones.
What Gets Extracted [#what-gets-extracted]
When a memory is processed, the LLM identifies:
* **Entities**: people, technologies, concepts, organizations, projects
* **Relationships**: how those entities connect
* **Links to existing knowledge**: connections to memories already in the graph
Trigger extraction for any memory by clicking **Knowledge Graph** on its card.
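If you work in the terminal, the same graph data is reachable through the `nmem` CLI (see the [CLI reference](/docs/cli)); a quick sketch, with the memory ID taken from a search result:

```bash
# Find a memory's ID, then inspect the entities and memories connected to it
nmem --json m search "PostgreSQL decision" | jq '.memories[0].id'
nmem g expand <memory-id>
```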
Knowledge Evolution [#knowledge-evolution]
When you save something new about a topic you've written about before, the system detects the relationship and creates a version link:
| Link type | Meaning | Example |
| -------------- | ---------------------- | ------------------------------------------------------------------- |
| **Replaces** | You changed your mind | "Use CockroachDB" replaces "Use PostgreSQL" |
| **Enriches** | You added depth | "React 19 adds a compiler" enriches "React 18 concurrent rendering" |
| **Confirms** | Independent agreement | Two separate reviews recommend the same library |
| **Challenges** | Contradiction detected | Your March assessment disagrees with your October conclusion |
You can trace how your understanding of any topic changed over time.
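The version chain is also visible from the terminal via the graph commands in the [CLI reference](/docs/cli):

```bash
# Show how a memory has been replaced, enriched, or superseded over time
nmem g evolves <memory-id>
```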
Community Detection [#community-detection]
Graph algorithms find natural clusters in your knowledge: groups of tightly connected memories that form coherent topics. Your graph might reveal clusters for "React Patterns," "API Design," and "Database Optimization." A map of your expertise you never drew by hand.
In **Graph View**, click **Compute** to run community detection.
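The same clusters can be listed, inspected, and recomputed from the terminal (commands from the [CLI reference](/docs/cli)):

```bash
nmem c                        # list detected communities
nmem c show <community-id>    # entities and memories in one cluster
nmem c detect                 # trigger community detection in the background
```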
Visual Exploration [#visual-exploration]
Your knowledge as an interactive network. Click a memory to see its connections. Zoom into clusters. Follow links between topics you never thought to compare.
The timeline slider filters by date range. Watch how your knowledge in a domain grew over weeks or months.
What the System Discovers [#what-the-system-discovers]
The graph is the foundation. On top of it, Background Intelligence actively analyzes your knowledge and surfaces findings in the Timeline.
Insights [#insights]
Insights are connections you wouldn't have found on your own.
* **Cross-domain links.** In March you noted that JWT refresh tokens were causing race conditions in the payment service. In September you chose the same token rotation pattern for a new auth service. The system catches it: same failure pattern, different project.
* **Temporal patterns.** "You've revisited this database migration decision 3 times in 2 months." Maybe it's time to commit.
* **Forgotten context.** "Your March assessment contradicts the approach you chose in October." The system remembers what you wrote, even when you don't.
Every insight cites its sources so you can trace the reasoning.
One insight that changes how you think beats ten that state the obvious. Strict quality gates keep the noise out.
Crystals [#crystals]
Five memories about React patterns saved over three months. Scattered across your timeline. Hard to piece together.
A crystal synthesizes them into one reference article. Sources are cited. When you save new related information, the crystal updates.
Crystals appear when the system has enough material to say something useful. You don't request them.
Flags [#flags]
Sometimes the system finds problems, not connections:
| Flag | What it means | Example |
| ---------------------- | -------------------------------- | ----------------------------------------------------------------- |
| **Contradiction** | Two memories disagree | "Use JWT tokens" vs. "Session cookies are more secure" |
| **Stale** | Newer knowledge supersedes older | A deployment guide from 6 months ago, overwritten by recent notes |
| **Needs verification** | Strong claim, no corroboration | A single memory making an assertion with no supporting evidence |
Each flag appears in the Timeline. You can dismiss it, acknowledge it, or link it to a resolution.
Working Memory [#working-memory]
Each morning, a briefing lands at `~/ai-now/memory.md`:
* **Active topics** based on recent activity
* **Unresolved flags** needing attention
* **Recent changes** in your knowledge base
* **Priority items** by frequency and recency
Any AI tool connected via MCP reads this file at session start. Your coding assistant already knows what you're working on before you say anything.
You can edit the file directly. Your changes are respected.
Your Working Memory at `~/ai-now/memory.md` is readable by any connected AI tool via MCP. Coding assistants, writing tools, and other agents check it before starting a task.
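The briefing is also accessible from the terminal, so scripts and agents can read or adjust it without opening the file (commands from the [CLI reference](/docs/cli)):

```bash
nmem wm                                    # read today's Working Memory
nmem wm patch --heading "## Focus Areas" \
  --append "Revisit the CockroachDB migration notes"   # add a line, leave the rest untouched
```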
Configuration [#configuration]
Control background processing in **Settings > Knowledge Processing**:
| Setting | Default | What it controls |
| --------------------------- | ----------------- | ----------------------------------------------------- |
| **Background Intelligence** | Off | Master toggle for all background processing |
| **Daily Briefing** | On (when enabled) | Morning Working Memory generation |
| **Briefing Hour** | 8 | What hour the daily briefing runs (local time) |
| **Auto Extraction** | On (when enabled) | Automatic knowledge graph enrichment for new memories |
On Linux servers, configure via CLI:
```bash
nmem config settings set backgroundIntelligence true
nmem config settings set autoDailyBriefing true
nmem config settings set briefingHour 8
```
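To confirm the values took effect, print the current configuration:

```bash
nmem config settings    # show all settings, including the three set above
```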
Next Steps [#next-steps]
* **[Getting Started](/docs/getting-started)**: The Timeline, document import, and all ways to add knowledge
* **[Integrations](/docs/integrations)**: Connect your AI tools via MCP and browser extensions
* **[Troubleshooting](/docs/troubleshooting)**: Common issues and solutions
# AI Now (/docs/ai-now)
import { Callout } from 'fumadocs-ui/components/callout';
import { Step, Steps } from 'fumadocs-ui/components/steps';
import { Tab, Tabs } from 'fumadocs-ui/components/tabs';
import { Telescope, FileText, Pencil, Presentation, Download, Plane, FastForward } from 'lucide-react';
import VideoPlayer from "@/components/ui/video-player";
AI Now is a personal AI agent running on your machine. It has full access to your knowledge base — every decision, insight, and document you've saved. It connects to Obsidian, Notion, Apple Notes, and any service through plugins.
It's not a chatbot. It has purpose-built capabilities: deep multi-source research, file and data analysis with visualization, presentation creation with live preview and export, and travel planning. Each one draws from your full context — your past decisions, your patterns, your history.
AI Now requires a configured **Remote LLM**.
Go to **Settings** → **Remote LLM** to set it up; see [Remote LLMs](/docs/usage#remote-llms) for details.
Capabilities [#capabilities]
| Category | What it does |
| ---------------------- | ------------------------------------------------------------ |
| **Memory Search** | Finds relevant memories with semantic understanding |
| **Deep Research** | Multi-source research combining your memories and web search |
| **File Analysis** | Analyzes Excel, CSV, Word, PDF files you provide |
| **Data Visualization** | Generates charts from your data |
| **Presentations** | Creates slides with live preview and PowerPoint export |
| **Travel Planning** | Creates interactive day-by-day itineraries |
| **Integrations** | Connects to Notion, Obsidian, Apple Notes, and MCP servers |
Getting Started [#getting-started]
Configure Remote LLM [#configure-remote-llm]
Go to **Settings** → **Remote LLM** and add your API key.
Open AI Now [#open-ai-now]
Click the **AI Now** tab in the sidebar, or press Cmd/Ctrl + 5.
Start a Task [#start-a-task]
Ask anything. AI Now searches your memories when relevant:
> What architecture decisions have I made about caching?
It pulls from your memories, searches the web and connected notes (Notion, Obsidian, Apple Notes), and synthesizes a single answer.
You can also drop files or folders for instant analysis, request reports based on your recent work, or run a deep study on any topic.
AI Now creates or updates memories as it works.
Refer to memories in your chat [#refer-memories-in-your-chat]
Use `@` to search and mention specific memories in your conversation.
Deep Research [#deep-research]
For comprehensive research, AI Now runs parallel sub-tasks across multiple sources and synthesizes the results.
Click the Research toggle in the AI Now chat interface.
How It Works [#how-it-works]
Ask a research question:
> Research the current state of quantum error correction
AI Now will:
1. Search your memories for existing knowledge on the topic
2. Search the web from multiple angles
3. Synthesize findings into a single answer
4. Cite sources with reliability indicators
Skills [#skills]
Skills are specialized capabilities you enable for specific tasks.
| Skill | What it enables |
| ------------------------ | ----------------------------------------------------- |
| **Documents** | Excel/CSV analysis, chart generation, file operations |
| **Presentation Creator** | Slide generation with live preview and export |
| **Travel Planner** | Interactive itinerary creation |
Enable skills in **AI Now** → **Plugins** → **Skills**.
File Analysis [#file-analysis]
Attach files or folders to your conversation for analysis.
Toggle the **Documents** skill in AI Now Plugins to enable it.
Supported Files [#supported-files]
| Type | Extensions | What AI Now Does |
| ---------------- | ------------------- | -------------------------------------------------- |
| **Spreadsheets** | .xlsx, .xls, .csv | Analyzes data, finds patterns, generates charts |
| **Documents** | .docx, .doc, .pdf | Summarizes, extracts key points, answers questions |
| **Code** | .py, .js, .ts, etc. | Reviews, explains, suggests improvements |
Example [#example]
1. Attach `sales_q4.xlsx`
2. Ask: "What are the top 3 trends in this data?"
3. AI Now analyzes and generates visualizations
Whole folders work too.
Presentations [#presentations]
Toggle the **Presentation Creator** skill in AI Now Plugins to enable it.
> Create a presentation based on our above study and research, include some charts or diagrams to support the insights
AI Now generates slides with structure, charts, and insights from your conversation.
Refine with follow-up requests ("Make the third slide more visual", "Add a slide about customer segments"), or click **Edit** to make changes directly.
Export as PowerPoint with the PPTX button.
Travel Planning [#travel-planning]
Toggle the **Travel Planner** skill in AI Now Plugins to enable it.
> Plan a 5-day trip to Tokyo focusing on food and culture
AI Now generates an interactive day-by-day itinerary using your recent memories and web research as context.
Plugins [#plugins]
Extend AI Now with connections to your other apps.
Built-in Plugins [#built-in-plugins]
Obsidian [#obsidian]
1. Go to **AI Now** → **Plugins**
2. Enable **Obsidian**
3. Set your vault path
AI Now can now search and read your Obsidian notes alongside your memories.
Notion [#notion]
1. Go to **AI Now** → **Plugins**
2. Enable **Notion**
3. Click **Connect** and authorize in the browser
AI Now can search your Notion pages and databases.
Apple Notes (macOS) [#apple-notes-macos]
1. Go to **AI Now** → **Plugins**
2. Enable **Apple Notes**
3. Grant permission when prompted
Custom MCP Plugins [#custom-mcp-plugins]
AI Now supports Model Context Protocol for custom integrations.
1. Go to **AI Now** → **Plugins** → **Custom Plugins**
2. Click **Add MCP Server**
3. Configure the server (stdio command or HTTP endpoint)
4. Click **Test Connection** to verify
5. Enable the plugin
MCP plugins with OAuth (GitHub, Slack, etc.) are detected automatically and prompt for authorization.
Session Management [#session-management]
Conversations are saved automatically. Click a previous session to resume, or create new sessions for parallel workstreams. Each session maintains its own history.
Auto-Approve Mode [#auto-approve-mode]
Enable Auto to skip confirmation prompts for file operations and other actions.
Auto-Approve grants AI Now permission to act without asking. Only enable for trusted workflows.
Tips [#tips]
* **Be specific**: "What did we decide about the database migration last month?" beats "database stuff"
* **Attach context**: drop files or mention notes with `@` for better results
* **Use sessions**: separate sessions for different projects or topics
Next Steps [#next-steps]
* **[Remote LLM Setup](/docs/usage#remote-llms)**: Configure your AI provider
* **[Integrations](/docs/integrations)**: Connect your AI tools
* **[Background Intelligence](/docs/advanced-features)**: How your knowledge grows on its own
# Nowledge Mem CLI (/docs/cli)
import { Step, Steps } from 'fumadocs-ui/components/steps';
import VideoPlayer from "@/components/ui/video-player";
The `nmem` CLI gives you terminal access to your Nowledge Mem knowledge base. Search memories, browse threads, read and edit Working Memory, explore the knowledge graph, and view your activity feed — all from the shell.
Installation [#installation]
Option 1: Standalone PyPI Package [#option-1-standalone-pypi-package]
Install on any machine — works with a local or remote Nowledge Mem server:
```bash
pip install nmem-cli
# or with uv
uv pip install nmem-cli
# or run without installing
uvx --from nmem-cli nmem --help
```
**Requirements:** Python 3.11+, Nowledge Mem running locally or reachable remotely.
The standalone package lets you reach your Nowledge Mem from servers, CI/CD pipelines, or remote workstations. See [Access Mem Anywhere](/docs/remote-access). View on [PyPI](https://pypi.org/project/nmem-cli/).
Option 2: Bundled with Desktop App [#option-2-bundled-with-desktop-app]
macOS [#macos]
Go to **Settings → Preferences → Developer Tools** and click **Install CLI**.
Installs to `~/.local/bin/nmem`. Make sure `~/.local/bin` is on your `PATH`:
```bash
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc && source ~/.zshrc
```
Windows [#windows]
The CLI is automatically available after app installation. Open a **new terminal window** to use `nmem`.
Linux [#linux]
Included with deb/rpm packages. The binary is placed in `/usr/local/bin/nmem`.
***
Quick Start [#quick-start]
```bash
nmem status # Check connection
nmem m search "project notes" # Search memories
nmem m add "Key insight" --title "Learning"
nmem wm # Read today's Working Memory
nmem f --days 1 # Today's activity
nmem g expand # Explore graph connections
nmem tui # Interactive terminal UI
```
***
Global Options [#global-options]
| Option | Description |
| ----------------- | ------------------------------------------- |
| `-j, --json` | Machine-readable JSON output |
| `--api-url <url>` | API URL (default: `http://127.0.0.1:14242`) |
| `-v, --version` | Show version |
| `-h, --help` | Show help |
**Aliases:** `m` = memories · `t` = threads · `wm` = working-memory · `g` = graph · `f` = feed · `c` = communities
***
Memory Commands (nmem m) [#memory-commands-nmem-m]
List memories [#list-memories]
```bash
nmem m # Recent 10 memories
nmem m -n 50 # List 50
nmem m --importance 0.7 # Minimum importance filter
```
Search [#search]
```bash
nmem m search "authentication patterns"
nmem m search "API design" --importance 0.8
nmem m search "deploy" -l devops -l backend # Filter by labels (AND)
nmem m search "sprint" --mode deep # Graph + LLM-enhanced results
```
**Bi-temporal search** — distinguish *when something happened* from *when you saved it*:
```bash
nmem m search "database decision" --event-from 2025-01 --event-to 2025-06
nmem m search "meeting notes" --recorded-from 2026-01-01
```
| Option | Description |
| -------------------- | --------------------------------------------------------- |
| `-n` | Max results |
| `-l, --label` | Filter by label (repeatable) |
| `--importance` | Minimum importance (0–1) |
| `--mode` | `normal` (default, fast) or `deep` (graph + LLM-enhanced) |
| `--event-from/to` | When the fact *happened* (YYYY, YYYY-MM, or YYYY-MM-DD) |
| `--recorded-from/to` | When it was *saved* to Nowledge Mem (YYYY-MM-DD) |
Add [#add]
```bash
nmem m add "We chose PostgreSQL for task events"
nmem m add "Prefer functional components in React" \
--title "Frontend conventions" \
--unit-type preference \
--importance 0.8 \
-l frontend -l react
# Record when something actually happened (bi-temporal)
nmem m add "Decided to sunset the legacy API" \
--unit-type decision \
--event-start 2025-11 \
--when past
```
| Option | Description |
| ------------------ | ------------------------------------------------------------------------------ |
| `-t, --title` | Memory title |
| `-i, --importance` | Importance 0–1 |
| `-l, --label` | Add label (repeatable) |
| `--unit-type` | `fact` `preference` `decision` `plan` `procedure` `learning` `context` `event` |
| `--event-start` | When it happened (YYYY, YYYY-MM, YYYY-MM-DD) |
| `--event-end` | End of a time range |
| `--when` | `past` `present` `future` `timeless` (default: timeless) |
Show [#show]
```bash
nmem m show <memory-id>
nmem m show <memory-id> --content-limit 500
```
Update [#update]
```bash
nmem m update <memory-id> --title "New title"
nmem m update <memory-id> --importance 0.9
nmem m update <memory-id> --content "Updated content"
```
Delete [#delete]
```bash
nmem m delete <memory-id>
nmem m delete <memory-id> -f            # Skip confirmation
nmem m delete <id-1> <id-2> <id-3>      # Multiple IDs
```
***
Thread Commands (nmem t) [#thread-commands-nmem-t]
List and search [#list-and-search]
```bash
nmem t # Recent 20 threads
nmem t -n 50
nmem t search "architecture decisions"
```
Show [#show-1]
```bash
nmem t show <thread-id>
nmem t show <thread-id> -m 50              # Show up to 50 messages
nmem t show <thread-id> --content-limit 200
```
Create [#create]
```bash
# From text
nmem t create -t "Quick note" -c "Remember to review the API changes"
# From a file
nmem t create -t "Meeting notes" -f notes.md
# With structured messages
nmem t create -t "Chat session" \
-m '[{"role":"user","content":"Hello"},{"role":"assistant","content":"Hi!"}]'
# With a stable ID (idempotent — safe to re-run)
nmem t create -t "OpenClaw session" --id "openclaw-abc123-session"
```
Append [#append]
Add messages to an existing thread. Safely idempotent — duplicate messages are filtered by content hash or external ID.
```bash
# Single message
nmem t append <thread-id> -c "Follow-up note"
# Structured messages
nmem t append <thread-id> \
-m '[{"role":"user","content":"Question"},{"role":"assistant","content":"Answer"}]'
# With idempotency key (safe for retries / repeated hook fires)
nmem t append <thread-id> \
-m '[{"role":"user","content":"msg"}]' \
--idempotency-key "oc-batch-session-001"
```
Save Claude Code / Codex session [#save-claude-code--codex-session]
```bash
nmem t save --from claude-code # Save Claude Code session
nmem t save --from codex # Save Codex session
nmem t save --from codex -s "Summary" # With session summary
```
| Option | Description |
| --------------- | --------------------------------------------- |
| `--from` | `claude-code` or `codex` (required) |
| `-p, --project` | Project directory path (default: current dir) |
| `-m, --mode` | `current` (latest) or `all` sessions |
| `--session-id` | Specific session ID (Codex only) |
| `-s, --summary` | Brief session summary |
| `--truncate` | Truncate large tool results (>10KB) |
Delete [#delete-1]
```bash
nmem t delete <thread-id>
nmem t delete <thread-id> -f        # Force
nmem t delete <thread-id> --cascade # Also delete associated memories
```
***
Working Memory (nmem wm) [#working-memory-nmem-wm]
Working Memory is the AI-generated daily briefing — focus areas, open questions, and recent activity. The Knowledge Agent updates it each morning.
Read [#read]
```bash
nmem wm # Today's Working Memory
nmem wm --date 2026-02-12 # Archived date
nmem wm history # List available archived dates
```
Edit [#edit]
```bash
nmem wm edit # Opens $EDITOR
nmem wm edit -m "## Focus Areas\n- Ship v0.6" # Set directly
```
Patch a section (non-destructive) [#patch-a-section-non-destructive]
Replace or append to one section without touching the rest of the document:
```bash
# Replace a section
nmem wm patch --heading "## Focus Areas" --content "- Finish OpenClaw plugin release"
# Append to a section
nmem wm patch --heading "## Notes" --append "Reminder: deploy to staging tonight"
```
The heading is matched case-insensitively and partially — `"Focus"` matches `"## Focus Areas"`.
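So a shorter heading works just as well; a sketch assuming today's briefing contains a "## Focus Areas" section:

```bash
nmem wm patch --heading "Focus" --append "- Review the staging deploy checklist"
```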
***
Graph Commands (nmem g) [#graph-commands-nmem-g]
Expand graph neighborhood [#expand-graph-neighborhood]
Explore connected memories, entities, crystals, and source documents around a given memory:
```bash
nmem g expand <memory-id>
nmem g expand <memory-id> --depth 2 # Two hops out
nmem g expand <memory-id> -n 10     # Limit neighbors per hop
```
Show EVOLVES version chain [#show-evolves-version-chain]
See how a memory has been refined or superseded over time:
```bash
nmem g evolves <memory-id>
```
***
Feed (nmem f) [#feed-nmem-f]
The activity feed shows what was saved, learned, synthesized, or ingested — chronologically.
```bash
nmem f # Last 7 days (high-signal events)
nmem f --days 1 # Today only
nmem f --days 30 # Last 30 days
nmem f --type crystal_created # Only crystal synthesis events
nmem f --from 2026-02-10 --to 2026-02-14 # Exact date range
nmem f --all # Include low-signal background events
nmem f -n 50 # Limit events (default: 100)
```
| Option | Description |
| ---------------- | ------------------------------------------------ |
| `--days` | How many days back (default: 7; use 1 for today) |
| `--type` | Filter by event type |
| `-n, --limit` | Max events to fetch (default: 100) |
| `--all` | Include low-signal background events |
| `--from`, `--to` | Exact date range (YYYY-MM-DD) |
**Event types:** `memory_created` · `crystal_created` · `insight_generated` · `source_ingested` · `source_extracted` · `daily_briefing` · `url_captured`
***
Knowledge Communities (nmem c) [#knowledge-communities-nmem-c]
Browse topic clusters automatically detected in your knowledge graph:
```bash
nmem c # List communities
nmem c -n 20
nmem c show <community-id>    # Show community details (entities, memories)
nmem c detect # Trigger community detection (background)
```
***
Configuration & Models [#configuration--models]
Embedding model [#embedding-model]
```bash
nmem models status # Check current model status
nmem models download # Download the embedding model
nmem models reindex # Rebuild the search index
```
LLM provider [#llm-provider]
```bash
nmem config provider list
nmem config provider set openai --api-key sk-xxx --model gpt-4o
nmem config provider test
```
Processing settings [#processing-settings]
```bash
nmem config settings # Show all settings
nmem config settings set briefingHour 8 # Change morning briefing time
```
License [#license]
```bash
nmem license status
nmem license activate
nmem license deactivate # Deactivate license on this device
```
***
Remote Access [#remote-access]
```bash
# LAN / private network
export NMEM_API_URL=http://192.168.1.100:14242
nmem status
# Cloudflare tunnel (from desktop app: Settings → Access Mem Anywhere)
export NMEM_API_URL=https://<your-url>
export NMEM_API_KEY=nmem_...
nmem m search "notes"
# One-off without env vars
nmem --api-url https://<your-url> status
```
| Variable | Description | Default |
| -------------- | --------------------- | ------------------------ |
| `NMEM_API_URL` | API server URL | `http://127.0.0.1:14242` |
| `NMEM_API_KEY` | API key (Bearer auth) | *(unset)* |
Full guide: [Access Mem Anywhere](/docs/remote-access).
***
JSON Output [#json-output]
Add `--json` (or `-j`) before the subcommand for machine-readable output:
```bash
nmem --json m search "API design" | jq '.memories[0].id'
nmem --json m add "Note" | jq -r '.id'
nmem --json f --days 1 | jq '.events[].title'
```
Search response [#search-response]
```json
{
"query": "API design",
"total": 3,
"search_mode": "fast_bm25_vector",
"memories": [
{
"id": "abc123-def456-...",
"title": "REST API versioning decision",
"content": "We use /v1/ prefix for all public endpoints...",
"score": 0.91,
"relevance_reason": "Text Match (89%) + Semantic Match (73%) | decay[imp:high]",
"importance": 0.8,
"labels": ["architecture", "api"],
"event_start": "2025-09",
"temporal_context": "past",
"source": "cli"
}
]
}
```
Feed response [#feed-response]
```json
{
"events": [
{
"id": "evt-...",
"event_type": "memory_created",
"severity": "info",
"title": "Memory or event title",
"description": "Summary text...",
"metadata": { "source": "claude", "unit_type": "fact" },
"related_memory_ids": ["..."],
"created_at": "2026-02-20T02:35:28+00:00"
}
]
}
```
Error response [#error-response]
```json
{
"error": "api_error",
"status_code": 404,
"detail": "Memory not found"
}
```
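When driving `nmem` from scripts, you can branch on this shape. A minimal sketch, assuming the error JSON is printed to standard output:

```bash
resp="$(nmem --json m show <memory-id>)"
if [ -n "$(printf '%s' "$resp" | jq -r '.error // empty')" ]; then
  echo "request failed: $(printf '%s' "$resp" | jq -r '.detail')" >&2
fi
```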
***
Status and Statistics [#status-and-statistics]
```bash
nmem status
# nmem v0.6.2
# status ok
# api http://127.0.0.1:14242
# database connected
nmem stats
# Database Statistics
# memories 83
# threads 27
# entities 248
# labels 177
# communities 32
```
***
AI Agent Integration [#ai-agent-integration]
The `--json` flag and stable exit codes make `nmem` easy to drive from AI agents.
```bash
# Search for context before responding
nmem --json m search "authentication flow" | jq '.memories[:3]'
# Save an insight
nmem m add "Rate limiting is per-user, not per-IP" \
--unit-type learning --importance 0.8 -l backend
# Save a decision with when it was made
nmem m add "Chose Postgres over MySQL for task events" \
--unit-type decision --event-start 2026-02 -l architecture
# Browse what was worked on last week
nmem --json f --days 7 | jq '.events[].title'
# Create a session thread backup
nmem t create -t "Debug session $(date +%Y%m%d)" \
-m '[{"role":"user","content":"Investigate auth failures"},{"role":"assistant","content":"Found rate limit issue"}]'
```
***
TUI [#tui]
An interactive terminal UI for browsing memories, threads, and the knowledge graph:
```bash
nmem tui
```
***
Troubleshooting [#troubleshooting]
**"command not found: nmem"**
* PyPI install: `pip install nmem-cli` (Python 3.11+)
* Run without installing: `uvx --from nmem-cli nmem --help`
* macOS desktop: Settings → Preferences → Developer Tools → Install CLI → then ensure `~/.local/bin` is on your PATH
* Windows: open a new terminal after app installation
**"Cannot connect to server"**
1. Ensure Nowledge Mem is running
2. Try: `nmem --api-url http://127.0.0.1:14242 status`
3. Check for proxy or VPN blocking localhost
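A quick way to confirm the local API is reachable at all is the health endpoint (the same one used for remote access checks):

```bash
curl -s http://127.0.0.1:14242/health
```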
# Community & Support (/docs/community)
import { Card as RawCard, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card"
import { MessageSquare, Twitter, Github, Mail, Users, BookOpen, MessageCircle, AlertTriangle, Lightbulb } from "lucide-react"
Community Channels [#community-channels]
Get Support [#get-support]
Documentation [#documentation]
* **[Getting Started](/docs/getting-started)** - Set up and create your first memories
* **[Integrations](/docs/integrations)** - Connect with AI tools via MCP and browser extensions
* **[Background Intelligence](/docs/advanced-features)** - Knowledge graph, insights, crystals, and working memory
* **[Troubleshooting](/docs/troubleshooting)** - Common issues and solutions
Report Issues & Request Features [#report-issues--request-features]
Pro plan users receive access to a dedicated Pro Discord channel and direct IM support. [Learn more about Pro](/docs/mem-pro).
# Getting Started (/docs/getting-started)
import VideoPlayer from "@/components/ui/video-player";
import { Step, Steps } from 'fumadocs-ui/components/steps';
The Timeline [#the-timeline]
Open Nowledge Mem. You see one input and a timeline below it.
Save a thought [#save-a-thought]
Type a decision, an insight, anything worth keeping. Hit enter.
Nowledge Mem handles the rest: title, key concepts, graph connections. You just write. Open the Graph view later and you'll see it already linked to related memories.
Ask a question [#ask-a-question]
Type a question: *"What did I decide about authentication last month?"*
The answer comes from **your own knowledge**, not the internet. Every question searches your full memory and synthesizes an answer from what you've written and saved.
Drop a URL or file [#drop-a-url-or-file]
Paste a URL. The page gets fetched, parsed, and stored as a searchable source. Drop a PDF, a Word doc, a presentation. Same treatment. Each input grows your knowledge base.
Connect Any Tool [#connect-any-tool]
One command installs the full skill set:
```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```
Works with Claude Code, Cursor, Codex, OpenCode, OpenClaw, Alma, and 20+ other agents. After setup, your agent starts each session with your context, searches your knowledge mid-task, and saves what it learns.
If OpenClaw is your first tool, use the 5-minute guide:
* **[OpenClaw in 5 Minutes](/docs/integrations/openclaw)**
For a lighter setup, open **Settings > Preferences** and install the CLI skill from **Developer Tools**. This gives agents core search and recall without the full autonomous workflow.
Or configure MCP directly [#or-configure-mcp-directly]
For any MCP-compatible tool, add this to its MCP settings:
```json
{
"mcpServers": {
"nowledge-mem": {
"url": "http://127.0.0.1:14242/mcp",
"type": "streamableHttp"
}
}
}
```
Claude Desktop [#claude-desktop]
[Download the extension](/docs/integrations#claude-desktop). One-click install, no config.
See [Integrations](/docs/integrations) for all tool-specific guides.
More Ways In [#more-ways-in]
* **AI conversations**: the [browser extension](/docs/integrations#browser-extension) captures insights from ChatGPT, Claude, Gemini, and 13+ platforms
* **Thread files**: [import](/docs/integrations#thread-file-import) exported conversations from Cursor, ChatGPT, or ChatWise
* **Manual**: create memories in the Memories view with **+ Create**, or from any terminal with `nmem m add` ([CLI reference](/docs/cli))
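For the CLI route, a minimal example (all options are in the [CLI reference](/docs/cli)):

```bash
nmem m add "Using PostgreSQL for the new service" --title "Database decision" -l architecture
```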
Come Back Tomorrow [#come-back-tomorrow]
Here's what happens after a few days of normal use:
**Tuesday** — you save a decision: "Using PostgreSQL for the new service." **Thursday** — you mention CockroachDB as a possible migration target. **Friday morning** — your briefing at `~/ai-now/memory.md` notes: "Your database thinking is evolving. PostgreSQL decision (Tuesday) now in tension with CockroachDB consideration (Thursday)." You didn't connect these yourself. Mem did.
This is **Background Intelligence** at work:
* **Knowledge evolution.** Mem detects when your thinking on a topic changes and links the versions together, with the full trail.
* **Crystals.** When enough memories cover the same ground, Mem synthesizes them into a reference article you can cite.
* **Flags.** Contradictions between your past and present thinking surface in the Timeline. You decide what to do.
* **Working Memory.** A daily briefing at `~/ai-now/memory.md`. Your AI tools read it at session start — they know what you're working on before you say anything.
None of this requires action from you. It shows up in the Timeline.
Background intelligence requires a [Pro license](/docs/mem-pro) and a configured Remote LLM.
Next Steps [#next-steps]
* **[Using Nowledge Mem](/docs/usage)**: Daily workflow: search, briefings, and how your tools use your knowledge
* **[AI Now](/docs/ai-now)**: Personal AI with full access to your knowledge base
* **[Background Intelligence](/docs/advanced-features)**: Knowledge graph, insights, crystals, and daily briefings
* **[Integrations](/docs/integrations)**: Connect your AI tools
* **[Access Mem Anywhere](/docs/remote-access)**: Reach your Mem from other laptops, agent nodes, and browser tools with URL + API key
# Nowledge Mem (/docs)
import { Card as RawCard, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card"
import { ArrowRight, Zap, Bot, Network, Sparkles } from "lucide-react"
import VideoPlayer from "@/components/ui/video-player"
Your AI tools forget everything. Nowledge Mem doesn't.
Save a decision, an insight, a breakthrough — it links to everything else you know. A knowledge graph grows as you work, tracking how your thinking evolves. Overnight, the system finds connections you missed and writes your AI tools a morning briefing.
Every tool you connect shares the same knowledge. Claude Code, Cursor, Codex, ChatGPT, whatever comes next. Explain something once. Every tool knows it.
Connect Any Tool [#connect-any-tool]
Works with anything that speaks MCP, plus browser extensions and direct plugins.
* Skill-based plugin with autonomous memory access
* First-time setup guide for Nowledge Mem memory plugin
* MCP integration for memory search and creation
* One-click extension installation
* Capture conversations from ChatGPT, Gemini, and 13+ platforms
Import Your Documents [#import-your-documents]
Drop a PDF, Word doc, or presentation into the Library. It gets parsed and indexed alongside your memories. When you ask a question in the Timeline, the answer draws from both.
Local-First Privacy [#local-first-privacy]
Everything runs on your device. No cloud, no accounts. You can connect a remote LLM when you want stronger processing, but your data never touches Nowledge servers.
**[Installation](/docs/installation)**: get started in minutes
**[Getting Started](/docs/getting-started)**: your first five minutes
# Installation (/docs/installation)
import { DragToApplicationsAnimation } from '@/components/docs/drag_install';
import { InstallationSteps } from '@/components/docs/installation-steps';
import { Step, Steps } from 'fumadocs-ui/components/steps';
import { Tab, Tabs } from 'fumadocs-ui/components/tabs';
import { ExternalLink, Download } from 'lucide-react';
import { Button } from '@/components/ui/button';
Nowledge Mem is currently in **private alpha**. To get download access:
* **Join the waitlist**: Submit your email [here](https://nowled.ge/alpha) and we'll send you the download link in hours.
* **Get instant access**: [Pro plan](/pricing) subscribers receive immediate download access
Already have access? You'll find a download link in your alpha invitation email. Check your **spam** folder if you don't see it.
System Requirements [#system-requirements]
Minimum system requirements:
| Requirement | Specification |
| -------------------- | ------------------------------------------------------------------------------------------- |
| **Operating System** | macOS 15 or later (Apple Silicon); Windows 10 or later |
| **Memory (RAM)** | 16 GiB minimum |
| **Disk Space** | 10 GiB available |
| **Network** | If using a proxy, ensure it bypasses `127.0.0.1` and `localhost` |
**Linux servers** are supported in headless mode. See the **[Linux Server Deployment](/docs/server-deployment)** guide to run Nowledge Mem on servers without a desktop environment.
Installation Steps [#installation-steps]
Step 1: Install the Application [#step-1-place-app]
**macOS**: Drag Nowledge Mem to your `/Applications` folder.
**Windows**: Install from the Microsoft Store. Search for "Nowledge Mem" in the [Microsoft Store](https://apps.microsoft.com/detail/9ntrknn2w5dq?hl=en-us\&gl=US\&ocid=pdpshare), or click the button below to open the store page, then click **Install** to install Nowledge Mem.
Step 2: Launch the Application [#step-2-first-boot]
**macOS**: Double-click the Nowledge Mem icon in your Applications folder to launch the app for the first time.
If the app takes too long to start or shows errors:
* **Service timeout**: If you see "It took too long to start the service", this usually means a global proxy is preventing access to `localhost`. Disable your proxy and try again.
* **macOS version**: Ensure you're running macOS 15 or later. Older versions are not supported.
* **Need more help?** Check the [Troubleshooting Guide](/docs/troubleshooting) to view logs and get detailed diagnostics. You can share logs with our community or email support for assistance.
**Windows**: After the installation completes, Nowledge Mem launches automatically. To launch it manually, click **Open** on the Nowledge Mem page in the Microsoft Store, or open the Start menu and search for "Nowledge Mem".
If the app takes too long to start or shows errors:
* **Service timeout**: If you see "It took too long to start the service", this usually means a global proxy is preventing access to `localhost`. Disable your proxy and try again.
* **Need more help?** Check the [Troubleshooting Guide](/docs/troubleshooting) to view logs and get detailed diagnostics. You can share logs with our community or email support for assistance.
Step 3: Download AI Models [#step-3-download-models]
After launching Nowledge Mem, you'll need to download the local AI models (approximately 2.4 GB total):
* **Apple Silicon Mac**: On-device LLM is supported.
* **Windows**: Remote LLM is required.
* **Intel Mac**: Remote LLM is required.
* **Linux**: Remote LLM is required.
1. **Check notifications**: You'll see download prompts in the top-right corner of the app.
2. **Navigate to models**: Click the notification button, or go to **Settings** → **Models**.
3. **Install models**: Click **Install** on the LLM model card.
The download will begin automatically, and you can monitor the progress:
Depending on your internet connection, the download may take 5-15 minutes. The models only need to be downloaded once.
Step 4: Install the Browser Extension [#step-4-browser-extension]
The **Nowledge Mem Exchange** browser extension captures insights from your AI conversations on ChatGPT, Claude, Gemini, and 13+ other platforms.
After installing, click the extension icon to open the SidePanel. Configure your LLM provider in **Settings** to enable auto-capture.
ChatGPT, Claude, Gemini, Perplexity, DeepSeek, Kimi, Qwen, POE, Manus, Grok, and more. The extension monitors your conversations and saves valuable insights: decisions, discoveries, and conclusions. Routine Q\&A is skipped. See the [Browser Extension guide](/docs/integrations#browser-extension) for details.
Next Steps [#next-steps]
* **[Getting Started](/docs/getting-started)**: Your first five minutes with the Timeline
* **[Integrations](/docs/integrations)**: Connect Claude Code, Cursor, and other AI tools
* **[Linux Server Deployment](/docs/server-deployment)**: Run headless on a Linux server
# Library (/docs/library)
import { Step, Steps } from 'fumadocs-ui/components/steps';
import VideoPlayer from "@/components/ui/video-player"
Drop a 40-page architecture review into the Library. Ask in the Timeline: *"What does the review say about API rate limits?"* The answer cites page 12 of the document and a Redis decision you saved three months ago. Your documents and your memories search together.
The Library stores PDFs, Word files, presentations, and Markdown. Content is parsed, split into searchable segments, and indexed. Every document becomes searchable from the Timeline, global search, and connected AI tools via MCP.
Supported Formats [#supported-formats]
| Format | Extensions | What Happens |
| ----------------- | ----------- | ------------------------------------------------------- |
| **PDF** | .pdf | Text extracted, split into segments, indexed for search |
| **Word** | .docx, .doc | Parsed to text, segmented, indexed |
| **Presentations** | .pptx | Slide content extracted and indexed |
| **Markdown** | .md | Parsed and indexed directly |
Adding Documents [#adding-documents]
Drag files into the Timeline input, or use the Library view to import.
Documents go through a processing pipeline:
1. **Parsing**: content extracted from the file format
2. **Segmentation**: split into searchable chunks
3. **Indexing**: added to both vector and keyword search indexes
Processing status is visible in the Library view. Once indexed, the document's content is searchable from the Timeline, global search, and connected AI tools via MCP.
Searching Documents [#searching-documents]
Documents are searched alongside memories. A Timeline question like *"What does the Q4 report say about churn?"* searches both your saved memories and any imported documents that match.
In the Library view, you can also browse and search documents directly.
How It Connects [#how-it-connects]
Documents in the Library are sources for your knowledge base, not memories themselves. The distinction:
* **Memories** are atomic insights, decisions, or facts you or the system extracted
* **Documents** are reference material you imported whole
When you distill a document, individual insights can be extracted as memories and connected to the knowledge graph. The document remains in the Library as the source.
Next Steps [#next-steps]
* **[Getting Started](/docs/getting-started)**: The Timeline and all ways to add knowledge
* **[Background Intelligence](/docs/advanced-features)**: How imported knowledge connects to your graph
* **[Search & Relevance](/docs/search-relevance)**: How search ranks results across memories and documents
# Mem Pro Plan (/docs/mem-pro)
import { Card as RawCard, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card"
import { Badge } from "@/components/ui/badge"
import { Button } from "@/components/ui/button"
import { ArrowRight, Download } from "lucide-react"
import { Step, Steps } from 'fumadocs-ui/components/steps';
Free vs Pro Plans [#free-vs-pro-plans]
Nowledge Mem offers two plans: **Free** and **Pro**.
The **Pro** plan unlocks unlimited memories, remote LLM integration (BYOK), and background intelligence features.
For detailed feature comparisons, visit the [Pricing Page](https://mem.nowledge.co/pricing).
Activating Your Lifetime Pro License [#activating-your-lifetime-pro-license]
Visit the pricing page and click the **Lifetime Pro** button to proceed to checkout:
Complete the payment using your email address.
Your email address will be used to receive the license key and is permanently associated with your Pro plan activation.
You'll receive an email with your license key.
You can retrieve your license key anytime at mem.nowledge.co/licenses using your email address.
Open Nowledge Mem and navigate to **Settings** → **Plans**:
Enter your email address and license key, then click **Activate License**:
Once activated, your Pro plan status will be displayed:
Manage your activated devices anytime at mem.nowledge.co/licenses.
Need help? Contact [hello@nowledge-labs.ai](mailto:hello@nowledge-labs.ai) for assistance with activation or licensing.
# Access Mem Anywhere (/docs/remote-access)
import { Callout } from 'fumadocs-ui/components/callout';
import { Step, Steps } from 'fumadocs-ui/components/steps';
Nowledge Mem can expose your local API through Cloudflare Tunnel. You get a public URL, and every request is still protected by your Mem API key.
Use this when you want one memory center across your laptop, desktop, agent nodes, and browser tools.
Choose Your Connection Type [#choose-your-connection-type]
| Type | Best for | What URL you get |
| ---------------------- | ---------------------------- | --------------------------------------------------------------------- |
| **Quick link** | Fast setup in under a minute | Random `*.trycloudflare.com` URL |
| **Cloudflare account** | Daily/long-term usage | Stable URL on your own domain (for example `https://mem.example.com`) |
Before You Start [#before-you-start]
Open this guide from **Settings → Access Mem Anywhere → Guide**.
* Quick link needs no Cloudflare account and no domain.
* Cloudflare account mode requires a domain already managed in your Cloudflare account.
* If you do not have a domain in Cloudflare yet, use **Quick link** first.
* In Cloudflare account mode, the final public URL appears only after you create a hostname route.
Path A: Quick Link (No Account) [#path-a-quick-link-no-account]
Open remote access in Mem [#open-remote-access-in-mem]
Open **Settings → Access Mem Anywhere**.
Turn on **Allow devices on same Wi-Fi** if you also want LAN access.
Choose Quick link and start [#choose-quick-link-and-start]
In **Access from Anywhere**, choose **Quick link**, then click **Start**.
Wait for status to become **Live**.
Copy URL and API key [#copy-url-and-api-key]
In **Ready to connect**, copy:
* **URL**
* **API key**
Use **Rotate** if you want to issue a fresh key.
Verify from another machine [#verify-from-another-machine]
```bash
export NMEM_API_URL="https://<your-url>"
export NMEM_API_KEY="nmem_..."
nmem status
```
Expected: `status ok`.
Path B: Cloudflare Account (Stable URL) [#path-b-cloudflare-account-stable-url]
You need a domain already in Cloudflare DNS (for example `example.com`) before this path can produce a stable URL.
Create a tunnel and copy the token [#create-a-tunnel-and-copy-the-token]
In Cloudflare Zero Trust:
1. Open Networks → Connectors → Create a tunnel.
2. Click Select Cloudflared.
3. Name the tunnel and click Save tunnel.
4. In **Install and run connectors**, copy the token from a command like:
```bash
sudo cloudflared service install ...
```
In Mem Desktop, you can paste either:
* the raw token, or
* the full command line (supported forms: `service install <token>`, `--token <token>`, `--token=<token>`).
Mem extracts the token automatically.
Create a public hostname route [#create-a-public-hostname-route]
In tunnel routing / hostname routes:
1. Create a hostname (for example `mem.example.com`).
2. Bind it to the tunnel you created.
This step creates your stable public URL.
Map the hostname to local Mem API [#map-the-hostname-to-local-mem-api]
1. Open Networks → Connectors → your tunnel.
2. In Published application routes, click Add a published application route.
3. Map `mem.example.com` to your local Mem server:
* Subdomain: `mem`
* Domain: your Cloudflare-managed domain
* Service Type: `HTTP`
* Service URL: `http://127.0.0.1:14242`
Do not append `/remote-api`.
Save and start in Mem [#save-and-start-in-mem]
Back in Settings → Access Mem Anywhere → Cloudflare account:
* Public URL: `https://mem.example.com`
* Tunnel token: paste raw token or full `cloudflared` command
Then:
* Click Save
* Click Start
* Click Rotate if you want a fresh key
* Click Copy to copy URL and API key
Verify from another machine [#verify-from-another-machine-1]
```bash
export NMEM_API_URL="https://mem.example.com"
export NMEM_API_KEY="nmem_..."
nmem status
```
Expected: `status ok`.
Use It on Other Clients [#use-it-on-other-clients]
nmem CLI [#nmem-cli]
```bash
export NMEM_API_URL="https://<your-url>"
export NMEM_API_KEY="nmem_..."
nmem status
nmem m search "project notes"
```
Browser Extension (SidePanel) [#browser-extension-sidepanel]
Open any supported AI chat page, then open **Nowledge Mem Exchange** in the browser SidePanel:
1. Click **Settings**
2. In **Access Mem Anywhere**, paste the terminal setup copied from Mem Desktop:
```bash
export NMEM_API_URL="https://<your-url>"
export NMEM_API_KEY="nmem_..."
```
3. Click **Fill URL + key**
4. Click **Save**
5. Click **Test connection** (should show success)
You can also type URL + key manually in the same section.
OpenClaw Plugin [#openclaw-plugin]
Two options — pick whichever fits your setup:
**Option A — Plugin config (recommended)**
Add `apiUrl` and `apiKey` directly to your plugin entry in `~/.openclaw/openclaw.json`:
```json
{
"plugins": {
"slots": { "memory": "openclaw-nowledge-mem" },
"entries": {
"openclaw-nowledge-mem": {
"enabled": true,
"config": {
"autoRecall": true,
"autoCapture": false,
"maxRecallResults": 5,
"apiUrl": "https://",
"apiKey": "nmem_..."
}
}
}
}
}
```
The key is passed to the `nmem` subprocess via environment variable only — it never appears in logs or process arguments.
**Option B — Environment variables**
Set these in your shell before starting OpenClaw:
```bash
export NMEM_API_URL="https://<your-url>"
export NMEM_API_KEY="nmem_..."
```
Both options are equivalent. Use Option A if OpenClaw runs as a service or you want the config self-contained. Use Option B to keep credentials out of config files.
MCP / Agent Nodes [#mcp--agent-nodes]
MCP clients connect via HTTP — pass your API key in the `Authorization` header.
**Cursor** (`~/.cursor/mcp.json` or workspace `.cursor/mcp.json`):
```json
{
"mcpServers": {
"nowledge-mem": {
"url": "https:///mcp",
"type": "streamableHttp",
"headers": {
"APP": "Cursor",
"Authorization": "Bearer nmem_..."
}
}
}
}
```
**Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
"mcpServers": {
"nowledge-mem": {
"url": "https:///mcp",
"type": "streamableHttp",
"headers": {
"APP": "Claude",
"Authorization": "Bearer nmem_..."
}
}
}
}
```
**Codex CLI** (`~/.codex/config.toml`):
```toml
[mcp_servers.nowledge-mem]
url = "https:///mcp"
[mcp_servers.nowledge-mem.http_headers]
APP = "Codex"
Authorization = "Bearer nmem_..."
```
**Claude Code / CI / other shell-based agents** — environment variables work too:
```bash
export NMEM_API_URL="https://<your-url>"
export NMEM_API_KEY="nmem_..."
```
Quick Health Check [#quick-health-check]
```bash
curl -H "Authorization: Bearer $NMEM_API_KEY" "$NMEM_API_URL/health"
```
Expected: health JSON response.
If wrong key:
```bash
curl -H "Authorization: Bearer wrong_key" "$NMEM_API_URL/health"
```
Expected: `401`.
If your proxy strips auth headers:
```bash
curl "$NMEM_API_URL/health?nmem_api_key=$NMEM_API_KEY"
```
Security and Operations [#security-and-operations]
* API key is required for every remote request.
* Rotate key anytime in Settings (old key becomes invalid immediately).
* After your first successful **Start**, tunnel reconnects automatically on app restart until you click **Stop**.
* Browse-Now / Browser Bridge automation endpoints are local-only and are not exposed through Access Anywhere.
* Stop tunnel when remote access is not needed.
Troubleshooting [#troubleshooting]
* **Start timed out**: your network/proxy may block Cloudflare traffic. Retry, or switch to Cloudflare account mode.
* **`401 Missing API key`**: proxy likely removed auth headers. Update `nmem`, or use query fallback for manual checks.
* **`429 Too many invalid auth attempts`**: wrong key was retried repeatedly. Re-copy key or click **Rotate**.
# Search & Relevance (/docs/search-relevance)
import { Callout } from 'fumadocs-ui/components/callout';
How Nowledge Mem finds matching memories, ranks them by relevance, and learns from your usage patterns.
The Scoring Pipeline [#the-scoring-pipeline]
Search combines multiple signals to rank results beyond keyword matching.
Semantic Scoring [#semantic-scoring]
This track finds memories that match what you're looking for:
* **Meaning-based search**: Finds memories by semantic similarity, not just exact words. Search for "design patterns" and find memories about "architectural approaches."
* **Keyword search**: Catches exact phrases and technical terms using BM25 ranking.
* **Label matching**: Surfaces memories with matching tags.
* **Graph traversal**: Discovers connected memories through entities and topic communities.
Decay & Temporal Scoring [#decay--temporal-scoring]
This track adjusts results based on freshness and your usage:
* **Recency**: Recently accessed memories score higher. We use exponential decay with about a 30-day half-life.
* **Frequency**: Memories you access repeatedly become more durable (logarithmic scaling with diminishing returns).
* **Importance floor**: High-importance memories maintain minimum accessibility even when unused.
* **Temporal matching**: Boosts memories whose event time matches your query (deep mode only).
These tracks combine into a final score that determines result ranking.
Memory Decay [#memory-decay]
Memories fade over time unless reinforced by use.
How It Works [#how-it-works]
**Recency**: A memory accessed yesterday scores much higher than one from three months ago. The 30-day half-life means scores roughly halve each month without access.
**Frequency**: Your 10th access to a memory matters more than your 100th. This mirrors how human memory works: early repetitions build durability, later ones have diminishing returns.
**Importance Floor**: Memories marked as high importance never fully decay. Even untouched, they maintain minimum accessibility. This protects foundational knowledge from fading away.
What This Means [#what-this-means]
* Active knowledge stays fresh
* Old memories don't disappear, they just rank lower when equally relevant
* Important knowledge persists regardless of access patterns
* The system learns from your behavior automatically
Temporal Understanding [#temporal-understanding]
Nowledge Mem understands two kinds of time.
Event Time vs Record Time [#event-time-vs-record-time]
**Event time** is when something actually happened:
* "The 2020 product launch"
* "Last quarter's decisions"
* "Before we migrated"
**Record time** is when you saved the memory. You might record a memory today about an event from 2020.
This matters for queries like "recent memories about 2020 events": things you saved recently (record time) about events from 2020 (event time).
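The `nmem` CLI exposes both clocks as separate filters (see [bi-temporal search](/docs/cli#search)), so that query is expressible directly:

```bash
# Saved recently (record time) about events that happened in 2020 (event time)
nmem m search "product launch" --event-from 2020-01 --event-to 2020-12 --recorded-from 2026-01-01
```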
Temporal Intent Detection [#temporal-intent-detection]
Temporal intent detection requires deep mode search. In fast mode, temporal references are matched by keywords only.
In deep mode, the system interprets temporal references:
| Query | Understanding |
| ---------------------------- | --------------------------- |
| "Decisions from 2023" | Event time: 2023 |
| "Recent memories" | Record time: recent |
| "Recent memories about 2020" | Event: 2020, Record: recent |
| "Before the migration" | Event: before that event |
Fuzzy references like "last quarter," "around 2020," or "early this year" are translated into meaningful filters.
Date Precision [#date-precision]
When you save a memory about "early 2020," the system:
1. Normalizes to a searchable date (2020-01-01)
2. Tracks precision level (year, month, or day)
3. Preserves original meaning for accurate matching
This lets "memories from 2020" (year precision) work differently from "memories from January 2020" (month precision).
Feedback Loop [#feedback-loop]
Your usage patterns continuously improve search relevance.
What We Track [#what-we-track]
| Signal | What It Captures |
| --------------- | -------------------------------------- |
| **Appearances** | How often a memory shows in results |
| **Clicks** | When you open a memory to view details |
| **Dwell time** | How long you spend reading |
How It Improves Search [#how-it-improves-search]
* High click-through rate indicates the memory is genuinely useful
* Long dwell time suggests valuable content
* Frequent appearances without clicks may indicate declining relevance
No action required. Relevance improves with normal use.
Graph-Powered Discovery [#graph-powered-discovery]
The knowledge graph enables discovery through entity and topic connections.
How Memories Connect [#how-memories-connect]
Each memory can link to:
* **Entities**: People, concepts, technologies, places mentioned
* **Other memories**: Through shared entities or relationships
* **Communities**: Topic clusters detected by graph analysis
Search Through Connections [#search-through-connections]
**Entity-mediated**: Find memories about "database optimization" even when tagged differently, through shared entities like PostgreSQL or indexing.
**Community-mediated**: A search about "authentication" might surface memories from your "Security Practices" community.
**Graph expansion**: Start from one memory and explore connected knowledge.
Search Modes [#search-modes]
Two modes, available across all interfaces:
Fast Mode [#fast-mode]
* Under 100ms typical response
* Direct semantic and keyword matching
* Entity and community search without language model analysis
* Best for quick lookups
Deep Mode [#deep-mode]
* Full language model analysis
* **Temporal intent detection** (e.g., "recently working on", "social events in the last decade")
* Query expansion for better recall
* Context-aware strategy weighting
* Better for exploratory searches
Both modes work in main search, global launcher, and API.
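From the CLI, the mode is a flag on search (see the [CLI reference](/docs/cli#search)):

```bash
nmem m search "authentication" --mode deep    # graph + LLM-enhanced, with temporal intent detection
nmem m search "authentication"                # normal mode: fast, default
```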
Result Transparency [#result-transparency]
Every result shows why it ranked where it did.
Search Query Details [#search-query-details]
After each search, you can view detailed analysis of how your query was interpreted:
* Which search strategies were used
* Temporal intent detection results (in deep mode)
* Query expansion and entity extraction
Score Breakdown [#score-breakdown]
Hover over any result's score to see a breakdown of how it was calculated:
* **Semantic score**: How well the content matches your query
* **Decay score**: Freshness based on recency and frequency
* **Temporal boost**: Event time relevance (when applicable)
* **Graph signals**: Entity and community connections
This makes it clear how usage patterns influence ranking and why certain memories appear for specific queries.
# Linux Server Deployment (/docs/server-deployment)
import { Step, Steps } from 'fumadocs-ui/components/steps';
import { Tab, Tabs } from 'fumadocs-ui/components/tabs';
Nowledge Mem can run as a **headless server** on Linux machines without a GUI. Install the same `.deb` or `.AppImage` package, then manage everything from the command line.
Background intelligence features (daily briefings, insight detection, knowledge graph enrichment) require a [Pro license](/pricing). The server itself runs on the free tier with a 20-memory limit.
System Requirements [#system-requirements]
| Requirement | Specification |
| -------------------- | ------------------------------------------------------------------------------- |
| **Operating System** | Ubuntu 22.04+, Debian 12+, RHEL 9+, or compatible |
| **Architecture** | x86\_64 |
| **Memory (RAM)** | 8 GiB minimum (16 GiB recommended) |
| **Disk Space** | 10 GiB available |
| **Dependencies** | `libgtk-3-0`, `libwebkit2gtk-4.1-0`, `zstd` (installed automatically by `.deb`) |
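Before installing, you can sanity-check a host against these requirements with standard tools (nothing here is specific to Nowledge Mem):
```bash
# Architecture: should print x86_64
uname -m

# Available RAM and free disk space
free -h
df -h /

# Runtime libraries on Debian/Ubuntu (the .deb installs these automatically)
dpkg -s libgtk-3-0 libwebkit2gtk-4.1-0 zstd | grep -E '^(Package|Status)'
```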
Installation [#installation]
```bash
# Install the package
sudo dpkg -i nowledge-mem_*.deb
# Fix any missing dependencies
sudo apt-get install -f
```
The `.deb` post-install script automatically:
* Extracts the bundled Python runtime
* Creates the `nmem` CLI at `/usr/local/bin/nmem`
* Sets up the desktop entry (ignored on headless servers)
If you are using the `.AppImage` package instead:
```bash
# Make executable
chmod +x Nowledge_Mem_*.AppImage
# Run once to extract the Python runtime
./Nowledge_Mem_*.AppImage --appimage-extract
# The nmem CLI is available after first run
# Location: ~/.local/bin/nmem
```
Verify the CLI is available:
```bash
nmem --version
```
Quick Start [#quick-start]
Start the Server [#start-the-server]
```bash
nmem serve
```
This runs the server **in the foreground** (press Ctrl+C to stop). The server starts on `0.0.0.0:14242` by default. Customize with flags:
```bash
nmem serve --host 127.0.0.1 --port 8080
```
For production, use `nmem service install` instead. It sets up a **background systemd service** that starts on boot. See [Running as a systemd Service](#running-as-a-systemd-service) below.
Activate Your License [#activate-your-license]
```bash
nmem license activate
nmem license status # Verify activation
```
Download the Embedding Model [#download-the-embedding-model]
```bash
nmem models download
nmem models status # Verify installation
```
This downloads the embedding model for hybrid search (\~500 MB). Only needed once.
Configure the LLM Provider [#configure-the-llm-provider]
A remote LLM is required on Linux (no on-device LLM support):
```bash
nmem config provider set anthropic \
--api-key sk-ant-xxx \
--model claude-sonnet-4-20250514
nmem config provider test # Verify connection
```
Supported providers: `anthropic`, `openai`, `ollama`, `openrouter`, and OpenAI-compatible endpoints.
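The same flags shown above should apply to the other providers. For example, a hypothetical OpenAI setup (the model name is only an illustration):
```bash
# Same pattern as the Anthropic example, pointed at OpenAI instead
nmem config provider set openai \
  --api-key sk-xxx \
  --model gpt-4o
nmem config provider test  # Verify connection
```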
Enable Background Intelligence [#enable-background-intelligence]
```bash
nmem config settings set backgroundIntelligence true
nmem config settings set autoDailyBriefing true
```
Verify Everything [#verify-everything]
```bash
nmem status
```
Running as a systemd Service [#running-as-a-systemd-service]
For production deployments, use `nmem service install` to set up a background systemd service that automatically starts on boot:
```bash
# Install, enable, and start
sudo nmem service install
# Custom host/port
sudo nmem service install --host 0.0.0.0 --port 8080
```
To install a user-level service instead:
```bash
# No root required
nmem service install --user
```
Managing the Service [#managing-the-service]
```bash
nmem service status # Show service status
nmem service logs -f # Follow service logs
nmem service stop # Stop the service
nmem service start # Start the service
nmem service uninstall # Stop, disable, and remove
```
Add `--user` to any `nmem service` command if you installed a user-level service.
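For example, checking and following a user-level service:
```bash
# Same subcommands as above, with --user for a user-level install
nmem service status --user
nmem service logs -f --user
```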
serve vs service [#serve-vs-service]
| | `nmem serve` | `nmem service install` |
| ------------------ | ----------------------------- | -------------------------------------- |
| **Runs in** | Foreground (current terminal) | Background (systemd) |
| **Stops when** | Ctrl+C or terminal closes | `nmem service stop` or system shutdown |
| **Starts on boot** | No | Yes (auto-enabled) |
| **Best for** | Testing, development | Production deployments |
Remote Access [#remote-access]
By default, the server listens on all interfaces (`0.0.0.0`). To access from other machines:
```bash
# From a remote machine with nmem-cli installed
export NMEM_API_URL=http://your-server:14242
nmem status
nmem m search "query"
```
Install the standalone CLI on remote machines:
```bash
pip install nmem-cli
# or
uv pip install nmem-cli
```
The server does not include authentication. For production use, restrict access via firewall rules or bind to `127.0.0.1` and use SSH tunneling or a reverse proxy with authentication.
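A minimal sketch of the SSH-tunnel approach (the hostname and user account are placeholders):
```bash
# On the server: install the service bound to localhost only
sudo nmem service install --host 127.0.0.1 --port 14242

# On your workstation: forward the port over SSH
ssh -N -L 14242:127.0.0.1:14242 user@your-server

# In another local terminal: point the CLI at the forwarded port
export NMEM_API_URL=http://127.0.0.1:14242
nmem status
```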
Interactive TUI [#interactive-tui]
For an interactive terminal experience, use the TUI:
```bash
nmem tui
```
The TUI provides a full settings management interface including license activation, LLM configuration, and knowledge processing toggles.
Configuration Reference [#configuration-reference]
Environment Variables [#environment-variables]
| Variable | Default | Description |
| ----------------------- | ------------------------ | --------------------------- |
| `NMEM_API_URL` | `http://127.0.0.1:14242` | Server URL for CLI commands |
| `NOWLEDGE_DB_PATH` | Auto-detected | Override database location |
| `NOWLEDGE_BACKEND_HOST` | `0.0.0.0` | Server bind address |
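These are ordinary environment variables. For example, to bind a foreground server to localhost and point CLI commands at it (values are illustrative):
```bash
# Bind the server to localhost instead of all interfaces
export NOWLEDGE_BACKEND_HOST=127.0.0.1
nmem serve

# In another shell: tell CLI commands where the server lives
export NMEM_API_URL=http://127.0.0.1:14242
nmem status
```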
CLI Commands Summary [#cli-commands-summary]
| Command | Description |
| -------------------------------------------- | -------------------------------------- |
| `nmem serve` | Start the server in the foreground |
| `nmem service install` | Install and start as a systemd service |
| `nmem service status` | Show systemd service status |
| `nmem service logs -f` | Follow service logs |
| `nmem service stop` / `start` | Stop or start the service |
| `nmem service uninstall` | Remove the systemd service |
| `nmem status` | Check server health |
| `nmem license activate`                                | Activate license                       |
| `nmem models download`                                 | Download embedding model               |
| `nmem config provider set <provider> --api-key <key>`  | Configure LLM provider                 |
| `nmem config provider test`                            | Test LLM connection                    |
| `nmem config settings`                                 | Show processing settings               |
| `nmem config settings set <key> <value>`               | Update a setting                       |
| `nmem tui` | Interactive terminal UI |
Next Steps [#next-steps]
* **[CLI Reference](/docs/cli)** - Complete CLI documentation
* **[API Reference](/docs/api)** - REST API endpoints
* **[Integrations](/docs/integrations)** - Connect with AI tools
# Troubleshooting (/docs/troubleshooting)
import { Button } from "@/components/ui/button"
import { Loader2, Trash2, AlertTriangle, Lightbulb, MessageSquare } from "lucide-react"
import { Card, CardContent } from "@/components/ui/card"
import { formatSize } from "@/lib/utils"
import { Github } from "@lobehub/icons"
import { Tabs, Tab, TabsList, TabTrigger, TabContent } from "fumadocs-ui/components/tabs"
Viewing Logs [#viewing-logs]
On macOS, the log file is located at `~/Library/Logs/Nowledge Graph/app.log`.
You can view it by running this command in your terminal:
```bash
open -a Console ~/Library/Logs/Nowledge\ Graph/app.log
```
On Windows, the log file is in one of two locations, depending on how the app was installed:
* `%LOCALAPPDATA%\Packages\NowledgeLabsLLC.NowledgeMem_1070t6ne485wp\logs\app.log` (installed from Microsoft Store)
* `%LOCALAPPDATA%\NowledgeGraph\logs\app.log` (installed from package file downloaded from Nowledge Mem website)
You can view it by pasting the appropriate path into the address bar of File Explorer:
```shell
%LOCALAPPDATA%\Packages\NowledgeLabsLLC.NowledgeMem_1070t6ne485wp\logs\app.log
```
or this:
```shell
%LOCALAPPDATA%\NowledgeGraph\logs\app.log
```
App Takes Too Long to Start [#app-takes-too-long-to-start]
**Symptom:** The app hangs or shows a timeout error during startup.
**Cause:** Global proxies or VPN software can prevent the app from accessing `http://127.0.0.1:14242` directly.
**Fix:** Configure your proxy or VPN tool to bypass localhost addresses. Add the following to your bypass/exclusion rules:
```
127.0.0.1, localhost, ::1
```
This allows you to keep your proxy/VPN enabled while ensuring Nowledge Mem can communicate with its local server. After updating the bypass rules, restart Nowledge Mem.
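Once the bypass rules are in place, you can confirm the app's local server is reachable from a terminal (the `/health` endpoint is the same one used elsewhere in these docs):
```bash
# Should return a healthy response if the local server is reachable
curl -sS http://127.0.0.1:14242/health
```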
AI Now Session Fails to Start [#ai-now-session-fails-to-start]
**Symptom:** Clicking **New Task** or resuming a paused task fails, and AI Now cannot open a session.
**What to do first:** Check the startup diagnostics card shown in AI Now.
When startup fails, AI Now displays a diagnostics card with:
* failure stage (`spawn`, `initialize`, or `new_session`)
* platform and process exit code
* recent `stderr` output from the startup script
* a copy button for sharing diagnostics
Click **Details** to expand technical fields, then click **Copy diagnostics** for support or issue reports.
**Common fixes (especially on Windows):**
1. Verify your installation is complete (embedded Python and startup scripts are present).
2. Restart Nowledge Mem after plugin or model configuration changes.
3. Temporarily disable antivirus/quarantine rules that may block bundled Python or PowerShell startup.
4. If a plugin is involved, reconnect expired OAuth plugins in **AI Now → Plugins** and retry.
If it still fails, include copied diagnostics plus `app.log` when reporting the issue.
Corrupted Model Cache [#corrupted-model-cache]
**Symptom:** Search, memory distillation, or knowledge extraction features stop working unexpectedly.
**Solution:** Clear the model cache and re-download the models.
Navigate to **Settings → Models** and click the clear cache button.
After clearing the cache, re-download the required models.
CLI Not Found [#cli-not-found]
**Symptom:** Running `nmem` in terminal returns "command not found".
**Solutions by platform:**
* **macOS**: Install the CLI from **Settings → Preferences → Developer Tools**
* **Windows**: Open a **new** terminal window after app installation (the PATH update requires a fresh session)
* **Linux**: The CLI is included with deb/rpm packages. If installed manually, ensure `/usr/local/bin` is in your PATH
**Quick check:** Run `nmem status` to verify the CLI can connect to Nowledge Mem.
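On macOS and Linux you can also check whether the binary is simply missing from your `PATH`; the install locations mentioned in these docs are `/usr/local/bin/nmem` and `~/.local/bin/nmem`:
```bash
# Is nmem on the PATH at all?
command -v nmem || echo "nmem not found in PATH"

# Check the documented install locations
ls -l /usr/local/bin/nmem ~/.local/bin/nmem 2>/dev/null
```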
Remote Access Returns 429 [#remote-access-returns-429]
**Symptom:** `nmem status` or `curl` returns `429 Too many invalid auth attempts`.
**Cause:** The client retried with an invalid API key too many times.
**Fix:**
* Re-copy URL + key from **Settings → Access Mem Anywhere**
* Ensure `NMEM_API_KEY` is the exact value (no extra spaces/quotes)
* If unsure, click **Rotate** to issue a new key
Full setup and validation steps: [Access Mem Anywhere](/docs/remote-access).
Remote Access Returns 401 Missing API key [#remote-access-returns-401-missing-api-key]
**Symptom:** Tunnel URL is reachable, but `nmem status` or `curl` returns `401 Missing API key`.
**Cause:** Some network proxies remove auth headers.
**Fix:**
* Update to latest `nmem` (it retries with proxy-safe fallback automatically)
* Re-copy URL + key from **Settings → Access Mem Anywhere**
* For manual `curl`, verify with:
`curl "$NMEM_API_URL/health?nmem_api_key=$NMEM_API_KEY"`
Report Issue [#report-issue]
If none of the above resolves your problem, report the issue on GitHub or Discord and include your `app.log` plus any copied diagnostics.
# Try These (/docs/try-these)
import { Callout } from 'fumadocs-ui/components/callout';
Your Timeline input handles everything: questions, captures, URLs, files, scheduling. Type naturally and AI figures out the rest. Here are the queries that show what the system can really do.
These queries get more powerful as your knowledge grows. After a week of regular use, the results will surprise you.
The Queries [#the-queries]
1. Show my Working Memory briefing [#1-show-my-working-memory-briefing]
Reads your current focus surface at `~/ai-now/memory.md`. What topics are active, what needs attention, recent activity summary. Connected AI tools (Claude Code, Cursor) read this automatically.
2. Which of my ideas have evolved the most? [#2-which-of-my-ideas-have-evolved-the-most]
Finds the longest EVOLVES chains: ideas that went through multiple revisions. Tells the story chronologically: "In January you decided on PostgreSQL. By March, you were considering a hybrid approach. Your latest note confirms the dual-database migration."
3. What wisdom has crystallized from my notes? [#3-what-wisdom-has-crystallized-from-my-notes]
Shows synthesized "crystals": reference articles the system distilled from multiple related memories overnight. These are the insights you couldn't get from any single note.
4. Summarize my recent coding conversations [#4-summarize-my-recent-coding-conversations]
If you use Claude Code, Cursor, or Codex, your sessions auto-sync. This lists and summarizes your latest coding sessions: what was discussed, what was built, what decisions were made.
5. Just decided to use PostgreSQL for the main database [#5-just-decided-to-use-postgresql-for-the-main-database]
Knowledge capture. The system saves it as a memory, searches for related decisions, and mentions connections: "This relates to your earlier note about database scaling." Just type naturally; the AI classifies what you share and stores it.
6. Save https://example.com/interesting-article [#6-save-httpsexamplecominteresting-article]
Paste a URL and the system fetches, parses, and indexes the content. AI reads the page and stores a substantive summary as a memory. The URL and its content become searchable. Add a note before the URL and AI captures both.
7. Tonight, run knowledge graph extraction on my recent memories [#7-tonight-run-knowledge-graph-extraction-on-my-recent-memories]
Schedule a background Knowledge Agent task. The agent fires at the specified time with full tool access: it can analyze memories, detect contradictions, create EVOLVES links, or produce crystals. Natural language timing: "in 2 hours", "tomorrow morning", "next week". Min 5 minutes, max 30 days.
8. Search my documents for [topic] [#8-search-my-documents-for-topic]
Full-text search across all source documents in your Library. Drop files (PDF, Word, markdown) onto the Timeline input or add them through the Library. They get parsed, chunked, and indexed for semantic search.
9. What are my main knowledge themes? [#9-what-are-my-main-knowledge-themes]
**Note**: This requires a week of regular use and background processing.
Community detection clusters your entities into topic areas with AI summaries. The system runs overnight analysis to group related concepts. You'll see themes you never consciously tracked: a "developer experience" cluster you didn't know existed, or a "data architecture" theme threading through months of notes.
The Compound Effect [#the-compound-effect]
These queries get more powerful over time:
* **Week 1**: Basic search works. Communities are small or empty.
* **Month 1**: Evolution chains appear. Crystals start forming. Themes emerge.
* **Month 3**: Cross-domain connections surprise you. Daily briefings are genuinely useful.
* **Month 6**: The system knows your expertise better than you can articulate it.
Next Steps [#next-steps]
* [Getting Started](/docs/getting-started): Set up in five minutes
* [See Your Expertise](/docs/use-cases/expertise-graph): Explore the knowledge graph visually
* [Background Intelligence](/docs/advanced-features): How the system learns overnight
# Using Nowledge Mem (/docs/usage)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
The Timeline [#the-timeline]
The Timeline is your home screen. Everything lives here: what you capture, what you ask, what the system discovers on its own.
Type into the input at the top. AI figures out what you meant and acts. A thought becomes a memory. A question gets answered from your knowledge. A URL gets fetched and indexed. A file gets parsed.
What You'll See [#what-youll-see]
| Item | What it is |
| ------------------ | ------------------------------------------------------------- |
| **Capture** | A memory you saved, with auto-generated title and tags |
| **Question** | Your question and the AI's answer, drawn from your knowledge |
| **URL Capture** | A web page fetched, parsed, and stored |
| **Insight** | A connection the system discovered between your memories |
| **Crystal** | A synthesized summary of multiple related memories |
| **Flag** | A contradiction, stale info, or claim that needs verification |
| **Working Memory** | Your daily morning briefing |
Your AI Tools [#your-ai-tools]
Connect any AI tool to your knowledge. Claude Code, Cursor, Codex, OpenCode, Alma, DeepChat, LobeHub, or whatever you switch to next.
**Without Mem:** *"Help me implement caching for the API."* Your agent asks about your stack, your infrastructure, your preferences. You explain everything from scratch.
**With Mem:** *"Help me implement caching for the API."* Your agent searches your knowledge, finds your Redis decision from last month and your API rate limiting patterns, and writes code that fits your architecture. No setup questions.
This happens without prompting. The tool recognizes it has access to your knowledge and uses it when relevant.
Save an insight in Claude Code today. Cursor finds it tomorrow when it encounters the same topic. No copying, no exporting.
You can also query directly: *"What did I decide about database migrations last month?"* Your agent searches your knowledge to answer.
See [Integrations](/docs/integrations) for setup instructions.
Search [#search]
In the App [#in-the-app]
Open memory search with Cmd + K (macOS). Search understands meaning, not just keywords. Searching "design patterns" finds memories about "architectural approaches."
Three search modes work together:
* **Semantic** finds memories by meaning
* **Keyword** does exact match for specific terms
* **Graph** discovers memories through entity connections and topic clusters
From Anywhere [#from-anywhere]
Press Cmd + Shift + K from any application to search without opening Nowledge Mem. Copy results directly where you need them. The [Raycast extension](/docs/integrations#raycast) brings the same search into your launcher.
AI Now [#ai-now]
AI Now is a personal AI agent running on your machine. It has your full knowledge base, your connected notes, and the web. Purpose-built capabilities — not just chat:
* **Deep research** that searches your memories and the web in parallel, then synthesizes
* **File analysis** that understands your spreadsheets in context — "what changed from last quarter" works because it knows last quarter
* **Presentations** with live preview and PowerPoint export
* **Plugins** for Obsidian, Notion, Apple Notes, and any MCP service
When you ask about caching, it already knows your Redis decision from last month. When you analyze data, it connects the numbers to your goals and history. Every capability draws from what you know.
AI Now requires a remote LLM. See [AI Now](/docs/ai-now) for the full guide.
Command Line [#command-line]
The `nmem` CLI gives full access from any terminal:
```bash
# Search your memories
nmem m search "authentication patterns"
# Add a memory
nmem m add "We chose JWT with 24h expiry for the auth service"
# JSON output for scripting
nmem --json m search "API design" | jq '.memories[0].content'
```
See the [CLI reference](/docs/cli) for the complete command set.
Remote LLMs [#remote-llms]
By default, everything runs locally. No internet required. As your knowledge base grows, a remote LLM gives you stronger processing.
Remote LLM configuration requires a [Pro license](/docs/mem-pro).
**What it unlocks:**
* **Background Intelligence**: automatic connections, crystals, insights, and daily briefings
* Faster knowledge graph extraction
* More nuanced semantic understanding
* AI Now agent capabilities
**Privacy:** your data is sent only to the LLM provider you choose. Never to Nowledge Mem servers. Switch back to local-only at any time.
1. Go to **Settings > Remote LLM**
2. Toggle **Remote** to enable
3. Select your LLM provider and enter your API key
4. Test the connection, select a model, and save
Next Steps [#next-steps]
* **[AI Now](/docs/ai-now)**: Deep research and analysis powered by your knowledge
* **[Background Intelligence](/docs/advanced-features)**: Knowledge graph, insights, crystals, working memory
* **[Integrations](/docs/integrations)**: Connect your AI tools
# Nowledge Mem API (/docs/api)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
# Integrations (/docs/integrations)
import VideoPlayer from "@/components/ui/video-player"
import { McpServerView } from "@/components/docs/mcp"
import { BrowserExtensionGuide } from "@/components/docs/browser-extension-guide"
import { FileImportGuide } from "@/components/docs/file-import"
import { InlineTOC } from 'fumadocs-ui/components/inline-toc';
import { Step, Steps } from 'fumadocs-ui/components/steps';
import { Button } from '@/components/ui/button';
import { Download } from 'lucide-react';
import { CodeXml } from 'lucide-react';
import { Files } from 'lucide-react';
import { Braces } from 'lucide-react';
import { FileText } from 'lucide-react';
Nowledge Mem connects to whatever tools you use today, and whatever you'll switch to tomorrow. Your knowledge stays in one place; the tools come and go.
Quick Start (One Command) [#quick-start-one-command]
For Claude Code, Cursor, Codex, OpenCode, OpenClaw, Alma, and 20+ agents:
```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```
This installs four skills: **search-memory**, **read-working-memory**, **save-thread**, and **distill-memory**. After setup, your agent reads context at session start, searches knowledge when relevant, and saves findings as it works.
| I want to... | Use |
| ----------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
| Use Nowledge Mem with **Claude Code, Codex, Cursor, OpenCode, or Alma** | npx skills (above) or [tool-specific setup](#claude-code) / [Alma plugin](#alma) |
| Use Nowledge Mem with **OpenClaw** | [OpenClaw in 5 Minutes](/docs/integrations/openclaw) |
| Search memories from **Raycast** | [Raycast extension](#raycast) |
| Capture memories from **ChatGPT, Claude, Gemini**, and 13+ AI platforms | [Browser extension](#browser-extension) (auto or manual) |
| Access Mem from **any machine over internet** | [Access Mem Anywhere guide](/docs/remote-access) |
| Build **custom integrations** | [REST API](#api-integration) or [CLI](#command-line-interface-cli) |
Model Context Protocol (MCP) [#model-context-protocol-mcp]
MCP is the protocol AI agents use to interact with Nowledge Mem. The npx skills above use MCP under the hood. For tools that need manual configuration, see below.
Two Integration Paths [#two-integration-paths]
| Path | Apps | Setup | Autonomous Behavior |
| -------------------- | ------------------------------------------------------------------------------------------------ | ---------------------------------- | ------------------------------------ |
| **Skill-Compatible** | Claude Code, Codex, Cursor, OpenCode, [OpenClaw](https://openclaw.ai), [Alma](https://alma.now/) | `npx skills add` or install plugin | Built-in triggers, no prompts needed |
| **MCP-Only** | Claude Desktop, Cursor, ChatWise, etc. | Configure MCP + system prompts | Requires system prompts for autonomy |
**Skill-compatible apps** (Claude Code, Codex, Cursor, OpenCode, OpenClaw, Alma): The npx skills command above is the fastest. Or jump to [Claude Code](#claude-code) / [Codex CLI](#codex-cli) / [Alma](#alma) for tool-specific setup.
**MCP-only apps**: Continue below to configure MCP and add system prompts for autonomous behavior.
MCP Capabilities [#mcp-capabilities]
* **Search memories**: `memory_search`
* **Read Working Memory**: `read_working_memory`
* **Add memories**: `memory_add`
* **Update memories**: `memory_update`
* **List memory labels**: `list_memory_labels`
* **Save/Import threads**: `thread_persist`
* **Prompts**: `sum` (summarize to memory), `save` (checkpoint thread)
MCP Server Configuration [#mcp-server-configuration]
System Prompts for Autonomous Behavior [#system-prompts-for-autonomous-behavior]
For MCP-only apps to act autonomously, add these instructions to your agent's system prompt or CLAUDE.md/AGENTS.md:
```markdown
## Nowledge Mem Integration
You have access to Nowledge Mem for knowledge management. Use these tools proactively:
**At Session Start (`read_working_memory`):**
- Read ~/ai-now/memory.md for today's briefing
- Understand user's active focus areas, priorities, and unresolved flags
- Reference this context naturally when it connects to the current task
**When to Search (`memory_search`):**
- Current topic connects to prior work
- Problem resembles past solved issue
- User asks about previous decisions ("why did we choose X?")
- Complex debugging that may match past root causes
**When to Save Memories (`memory_add`):**
- After solving complex problems or debugging
- When important decisions are made with rationale
- After discovering key insights ("aha" moments)
- When documenting procedures or workflows
- Skip: routine fixes, work in progress, generic Q&A
**Memory Categories (use as labels):**
- insight: Key learnings, realizations
- decision: Choices with rationale and trade-offs
- fact: Important information, data points
- procedure: How-to knowledge, workflows
- experience: Events, conversations, outcomes
**Memory Quality:**
- Atomic and actionable (not vague)
- Standalone context (readable without conversation)
- Focus on "what was learned" not "what was discussed"
**Importance Scale (0.1-1.0):**
- 0.8-1.0: Critical decisions, breakthroughs
- 0.5-0.7: Useful insights, standard decisions
- 0.1-0.4: Background info, minor details
**When to Save Threads (`thread_persist`):**
- Only when user explicitly requests ("save this session")
- Never auto-save without asking
```
This enables autonomous memory operations in Claude Desktop, Cursor, ChatWise, and other MCP-only apps.
Browser Extension [#browser-extension]
Nowledge Mem Exchange captures memories from AI conversations on ChatGPT, Claude, Gemini, and 13+ platforms. It runs in a native Chrome SidePanel alongside your conversations.
Smart Distill [#smart-distill]
Auto-capture evaluates each conversation turn and saves what matters. Configure your preferred LLM provider and let the extension work autonomously.
Three Ways to Capture [#three-ways-to-capture]
| Mode | How it works | When to use |
| ------------------ | -------------------------------------------------------------------- | -------------------------------------------------------------------- |
| **Auto-Capture** | Monitors your conversations and autonomously saves valuable insights | Set it and forget it. The extension decides what's worth remembering |
| **Manual Distill** | You trigger capture on a conversation you want to save | When you know a conversation contains something important |
| **Thread Backup** | Imports the full conversation as a thread, with incremental dedup | Archive entire conversations for later distillation in the app |
Auto-Capture [#auto-capture]
When enabled, the extension monitors conversations and applies strict criteria to decide what's worth saving:
* **Refined conclusions**: decisions, plans, finalized approaches
* **Important discoveries**: breakthroughs, key findings
* **Knowledge explorations**: deep dives, research synthesis
Routine Q\&A and generic exchanges are skipped. The extension checks for duplicates before saving and can update existing memories instead of creating new ones.
Auto-capture requires a configured LLM provider. Open the SidePanel, go to **Settings**, and add your API key. Supported providers: OpenAI, Anthropic, Google, xAI, OpenRouter, Ollama, and OpenAI-compatible endpoints.
Thread Backup [#thread-backup]
Imports the full conversation as a thread. Subsequent backups only capture new messages (incremental sync). Once imported, trigger Memory Distillation from the app to extract individual memories.
For local coding assistants, Nowledge Mem also supports **AI Conversation Discovery (auto-sync)** with incremental updates for **Claude Code, Cursor, Codex, and OpenCode**.
Supported Platforms [#supported-platforms]
The extension works with all major AI chat services:
| Platform | Sites |
| -------------- | -------------------------- |
| **ChatGPT** | openai.com, chatgpt.com |
| **Claude** | claude.ai |
| **Gemini** | gemini.google.com |
| **Perplexity** | perplexity.ai |
| **DeepSeek** | chat.deepseek.com |
| **Kimi** | kimi.moonshot.cn |
| **Qwen** | qwen.ai, tongyi.aliyun.com |
| **POE** | poe.com |
| **Manus** | manus.im |
| **Grok** | grok.com, grok.x.ai, x.ai |
| **Open WebUI** | localhost, private IPs |
| **ChatGLM** | chatglm.cn |
| **MiniMax** | agent.minimaxi.com |
Pro users with a configured LLM can auto-generate handlers for any AI chat site. Navigate to the site, open the SidePanel, and click **Generate handler**. The extension analyzes the page structure and creates a custom handler automatically.
Connect Extension to Access Mem Anywhere [#connect-extension-to-access-mem-anywhere]
If your Mem API is exposed through **Settings → Access Mem Anywhere** in the desktop app:
1. Open any supported AI chat page, then open the extension SidePanel
2. Click **Settings**
3. In **Access Mem Anywhere**, paste:
* `export NMEM_API_URL="https://..."`
* `export NMEM_API_KEY="nmem_..."`
4. Click **Fill URL + key**
5. Click **Save**, then **Test connection**
Full guide (Quick link and Cloudflare account modes): [Access Mem Anywhere](/docs/remote-access).
Download [#download]
The extension also supports downloading any conversation thread as a `.md` file for archiving or sharing.
} title="MD Format Reference">
Example conversation file in MD format
Thread File Import [#thread-file-import]
Import conversations from your favorite AI tools by uploading exported conversation files directly into Nowledge Mem.
AI Conversation Discovery (Auto-Sync) [#ai-conversation-discovery-auto-sync]
Find and import local coding-assistant conversations directly from the app:
| Client | Sync Mode | Where |
| --------------- | --------------------------------- | ---------------------------------------- |
| **Claude Code** | Auto-discovery + incremental sync | Threads → Import → Find AI Conversations |
| **Cursor** | Auto-discovery + incremental sync | Threads → Import → Find AI Conversations |
| **Codex** | Auto-discovery + incremental sync | Threads → Import → Find AI Conversations |
| **OpenCode** | Auto-discovery + incremental sync | Threads → Import → Find AI Conversations |
Bulk Import (Multiple Threads at Once) [#bulk-import-multiple-threads-at-once]
For users with large conversation histories, Nowledge Mem supports bulk importing all your conversations from a single export file:
| Source | File Format | How to Export |
| ------------ | ---------------------------- | -------------------------------------- |
| **ChatGPT** | `chat.html` | Settings → Data controls → Export data |
| **ChatWise** | `.zip` (contains JSON files) | Export all chats from ChatWise app |
Single Thread Import [#single-thread-import]
For importing individual conversations:
| Source | File Format | Notes |
| ------------ | ----------- | --------------------------------------- |
| **Cursor** | `.md` | Export conversation from Cursor |
| **ChatWise** | `.html` | Single chat HTML export |
| **Generic** | `.md` | Any markdown with user/assistant format |
For developers building custom import tools:
* **Thread API**: create threads programmatically from your tool ([API reference](https://mem.nowledge.co/docs/api/threads/post))
* **Markdown format**: convert conversations to an importable `.md` file ([format reference](https://github.com/nowledge-co/nowledge-mem/blob/main/refs/nowledge_mem_exchange/example_conversation_file.md))
} title="Create Thread API">
API Docs for creating a thread in Nowledge Mem
} title="MD Format Reference">
Example conversation file in MD format
Tight Integrations [#tight-integrations]
**DeepChat** and **LobeHub** include Nowledge Mem as a built-in integration.
Claude Desktop [#claude-desktop]
One-click extension for Claude Desktop.
Download Extension
Install Extension
Ensure Python 3.13 is installed on your system.
Open **Terminal.app** and run the following commands:
```bash
which brew || /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
python3.13 --version || /opt/homebrew/bin/brew install python@3.13
```
1. Double-click the downloaded `claude-dxt.mcpb` file from your browser's download area
2. Click the **Install** button in the installation dialog
3. Restart the Claude Desktop App
You can now ask Claude to save insights to Nowledge Mem, update existing memories, or search your knowledge base anytime during conversations.
> Note: if you fail to enable Mem in Claude Desktop, check the logs via `tail -n 20 -F ~/Library/Logs/Claude/mcp*.log` and share them with us.
Claude Code [#claude-code]
Claude Code supports skills: install the plugin for built-in autonomous behavior. No system prompts or MCP configuration needed.
The CLI-based plugin includes skills that:
* Search your knowledge base when relevant context exists
* Suggest distillation at breakthrough moments
* Save sessions on explicit request
Install the Claude Code plugin
Install the Nowledge Mem plugin for autonomous search, save, and checkpoint behavior. The plugin uses the `nmem` CLI. See: [Claude Code plugins](https://docs.claude.com/en/docs/claude-code/plugins).
```bash
# Add the Nowledge community marketplace
claude plugin marketplace add nowledge-co/community
# Install the Nowledge Mem plugin
claude plugin install nowledge-mem@nowledge-community
```
**Prerequisites**: The plugin requires `nmem` CLI. Install it with:
```bash
# Option 1 (Recommended): Use uvx (no installation needed)
curl -LsSf https://astral.sh/uv/install.sh | sh
uvx --from nmem-cli nmem --version
# Option 2: Install with pip
pip install nmem-cli
```
**Note**: On Windows/Linux with Nowledge Mem Desktop app, `nmem` is bundled. On macOS or remote servers, use `uvx` or install manually.
**Update Plugin**: To get the latest version:
```bash
claude plugin marketplace update
claude plugin update nowledge-mem@nowledge-community
# Restart Claude Code to apply changes
```
Usage
Three ways to use Nowledge Mem inside a Claude Code chat:
**Slash Commands (Quick Access)**
Type these commands directly:
* `/save` - Save current session to Nowledge Mem
* `/sum` - Distill insights from this conversation
* `/search <query>` - Search your knowledge base
**Natural Language**
* Say "Save this session" or "Checkpoint this conversation"
* Claude will automatically run `nmem t save --from claude-code`
* Say "Distill this conversation" or "Save the key insights"
* Claude will analyze and create structured memories using `nmem m add`
**Autonomous (via Skills)**
The plugin includes four skills that work automatically:
* **Read Working Memory**: loads your daily briefing at session start and after context compaction
* **Search Memory**: searches when you reference past work
* **Distill Memory**: suggests distillation at breakthrough moments
* **Save Thread**: saves sessions on explicit request
**Lifecycle Hooks**
The plugin includes [Claude Code hooks](https://code.claude.com/docs/en/hooks) for automatic lifecycle management:
| Event | Trigger | Action |
| ------------------------ | ------------------------ | ------------------------------------------------------------------- |
| `SessionStart` (startup) | New session begins | Injects Working Memory briefing |
| `SessionStart` (compact) | After context compaction | Re-injects Working Memory and prompts Claude to checkpoint progress |
These hooks run automatically. Working Memory context is injected into Claude's context at startup and after compaction, so Claude always knows your current priorities. After compaction, Claude is prompted to save important findings via `nmem m add` before continuing.
**Autonomous Knowledge Capture**
For proactive memory management, see the complete example: **[AGENTS.md](https://github.com/nowledge-co/community/blob/main/examples/AGENTS.md)**: a memory-keeper agent using the [agents.md standard](https://agents.md/) that works with any AI coding agent.
Codex CLI [#codex-cli]
Codex supports custom prompts: install them for built-in slash commands. No MCP configuration needed.
Codex integration via `nmem` CLI and custom prompts.
**Install nmem CLI**
Install the CLI:
```bash
# Option 1 (Recommended): Use uvx (no installation needed)
curl -LsSf https://astral.sh/uv/install.sh | sh
uvx --from nmem-cli nmem --version
# Option 2: Install with pip
pip install nmem-cli
```
**Note**: On Windows/Linux with Nowledge Mem Desktop app, `nmem` is bundled. On macOS or remote servers, use `uvx` or install manually.
**Install Custom Prompts**
Install custom prompts for slash commands:
> Fresh install:
```bash
curl -fsSL https://raw.githubusercontent.com/nowledge-co/community/main/nowledge-mem-codex-prompts/install.sh | bash
```
> Update install:
```bash
curl -fsSL https://raw.githubusercontent.com/nowledge-co/community/main/nowledge-mem-codex-prompts/install.sh -o /tmp/install.sh && bash /tmp/install.sh --force && rm /tmp/install.sh
```
Usage inside a Codex chat:
**Slash Commands**
Type these commands directly:
* `/prompts:read_working_memory` - Load your daily Working Memory briefing for context
* `/prompts:save_session` - Save current session using `nmem t save --from codex`
* `/prompts:distill` - Distill insights using `nmem m add`
Or type `/` and search for "memory", "save", or "distill" to find them.
**Troubleshooting**
* **"Command not found: nmem"** → Use `uvx --from nmem-cli nmem --version` or install with `pip install nmem-cli`
* **"Command not found: uvx"** → Install uv with `curl -LsSf https://astral.sh/uv/install.sh | sh`
* **Sessions not listing** → Ensure you're in the correct project directory
DeepChat [#deepchat]
DeepChat has built-in Nowledge Mem support.
Enable MCP in DeepChat
Toggle on the switch under **Settings > MCP Settings**
Enable Nowledge Mem
Toggle on the nowledge-mem switch under Custom Servers
LobeHub [#lobehub]
LobeHub (formerly LobeChat) has built-in Nowledge Mem support.
One-Click Installation
Install Nowledge Mem directly in LobeHub using the one-click installation feature:
Click the **Install** button to install the Nowledge Mem LobeHub plugin.
OpenClaw [#openclaw]
[OpenClaw](https://openclaw.ai) plugin for persistent agent memory.
Source: [community/nowledge-mem-openclaw-plugin](https://github.com/nowledge-co/community/tree/main/nowledge-mem-openclaw-plugin)
Use the dedicated setup guide:
**[OpenClaw in 5 Minutes](/docs/integrations/openclaw)**
Includes:
* correct slot-based config (`plugins.slots.memory = "openclaw-nowledge-mem"`)
* install and verification commands
* optional lifecycle capture setup
* local-first regression validation workflow
Alma [#alma]
[Alma](https://alma.now/) plugin for persistent memory workflows.
Source: [community/nowledge-mem-alma-plugin](https://github.com/nowledge-co/community/tree/main/nowledge-mem-alma-plugin)
Clone the plugin, install dependencies, and copy it into Alma's local plugin directory
```bash
git clone https://github.com/nowledge-co/community.git
cd community/nowledge-mem-alma-plugin
npm install
mkdir -p ~/.config/alma/plugins/nowledge-mem
cp -R . ~/.config/alma/plugins/nowledge-mem
```
Restart Alma
**What the plugin provides:**
* **Tool suite**: memory query/search/store/show/update/delete + thread search/show/create/delete + Working Memory
* **Command palette actions**: status, search, save memory, read Working Memory, save current thread
* **Auto-recall hook**: injects Working Memory + relevant memories on first outgoing message in each thread
* **Optional auto-capture hook**: saves current thread on app quit
* **Local-first runtime**: uses `nmem` CLI (fallback `uvx --from nmem-cli nmem`)
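To confirm which runtime the plugin will find on your machine, a quick check that mirrors that fallback:
```bash
# Prefer an installed nmem; otherwise fall back to uvx
nmem --version 2>/dev/null || uvx --from nmem-cli nmem --version
```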
Raycast [#raycast]
[Raycast](https://raycast.com) extension with four commands:
Source: [community/nowledge-mem-raycast](https://github.com/nowledge-co/community/tree/main/nowledge-mem-raycast)
| Command | What it does |
| ----------------------- | ----------------------------------------------------------------------------- |
| **Search Memories** | Semantic search with relevance scores, copy content or title from any result |
| **Add Memory** | Save a memory with title, content, and importance |
| **Working Memory** | View your daily briefing |
| **Edit Working Memory** | Edit `~/ai-now/memory.md` inline, changes respected by all connected AI tools |
**Raycast Store** (coming soon): Once [our Store submission](https://github.com/raycast/extensions/pull/25451) is merged, search "Nowledge Mem" in the Raycast Store to install.
**Install from source** (available now):
```bash
git clone https://github.com/nowledge-co/community.git
cd community/nowledge-mem-raycast
npm install && npm run dev
```
Requires Nowledge Mem running locally. The extension calls the HTTP API at `localhost:14242` for search and memory creation, and reads `~/ai-now/memory.md` for Working Memory.
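A quick way to confirm those prerequisites before launching the extension (the health endpoint and the Working Memory path both appear elsewhere in these docs):
```bash
# Local API reachable?
curl -sS http://localhost:14242/health

# Working Memory file present?
head ~/ai-now/memory.md
```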
LLM-Friendly Documentation [#llm-friendly-documentation]
Every page on this docs site is available as clean Markdown for AI agents and LLMs. Request any docs URL with the `Accept: text/markdown` header and you get Markdown instead of HTML:
```bash
# Fetch any docs page as Markdown
curl -H "Accept: text/markdown" https://mem.nowledge.co/docs
curl -H "Accept: text/markdown" https://mem.nowledge.co/docs/getting-started
curl -H "Accept: text/markdown" https://mem.nowledge.co/docs/integrations
```
Dedicated endpoints are also available:
| Endpoint | What it returns |
| --------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- |
| [`/llms-full.txt`](https://mem.nowledge.co/llms-full.txt) | All documentation pages concatenated into one file |
| `/llms.mdx/docs/` | A single page as Markdown (e.g. [`/llms.mdx/docs/getting-started`](https://mem.nowledge.co/llms.mdx/docs/getting-started)) |
No authentication required.
API Integration [#api-integration]
RESTful API for programmatic access.
} href="/docs/api" title="API Reference">
Nowledge Mem RESTful API Documentation.
} title="OpenAPI Spec">
openapi.json
Command Line Interface (CLI) [#command-line-interface-cli]
The `nmem` CLI provides terminal access to your knowledge base.
Installation [#installation]
| Platform | Installation |
| ----------- | ------------------------------------------------------ |
| **macOS** | Settings → Preferences → Developer Tools → Install CLI |
| **Windows** | Automatically installed with the app |
| **Linux** | Included with deb/rpm packages |
Quick Start [#quick-start]
```bash
# Check connection
nmem status
# Search memories
nmem m search "project notes"
# List recent memories
nmem m
# Create a memory
nmem m add "Important insight" --title "Project Learnings"
# Search threads
nmem t search "architecture"
# Save Claude Code/Codex sessions via CLI
nmem t save --from claude-code
nmem t save --from codex -s "Summary of what was accomplished"
# Create a thread from content
nmem t create -t "Session Notes" -c "Key discussion points..."
# Create a thread from file
nmem t create -t "Meeting Notes" -f notes.md
```
AI Agent Integration [#ai-agent-integration]
The CLI is designed for AI agent workflows with JSON output:
```bash
# Get JSON output for parsing
nmem --json m search "API design"
# Chain commands
ID=$(nmem --json m add "Note" | jq -r '.id')
nmem --json m update "$ID" --importance 0.9
# Multi-message thread creation
nmem t create -t "Session" -m '[{"role":"user","content":"Q"},{"role":"assistant","content":"A"}]'
```
Command Reference [#command-reference]
| Command | Alias | Description |
| --------------- | -------- | ----------------------- |
| `nmem status` | | Check server connection |
| `nmem stats` | | Database statistics |
| `nmem memories` | `nmem m` | Memory operations |
| `nmem threads` | `nmem t` | Thread operations |
For complete CLI documentation, run `nmem --help` or see the CLI Reference on GitHub.
Share your integration on GitHub or Discord.
Next Steps [#next-steps]
* **[Troubleshooting](/docs/troubleshooting)**: Common issues and solutions
* **[Background Intelligence](/docs/advanced-features)**: Knowledge graph, insights, and daily briefings
# OpenClaw × Nowledge Mem (/docs/integrations/openclaw)
import { Step, Steps } from 'fumadocs-ui/components/steps';
Once configured, your OpenClaw agent remembers what you said in the last session, the decision you made last week, and the knowledge you wrote into a document three months ago.
Before You Start [#before-you-start]
You need:
* Nowledge Mem running locally ([installation](/docs/installation))
* OpenClaw installed ([OpenClaw getting started](https://docs.openclaw.ai/start/openclaw))
* `nmem` CLI on your PATH
```bash
nmem status # should show Nowledge Mem is running
openclaw --version
```
Setup [#setup]
Install the plugin
```bash
openclaw plugins install @nowledge/openclaw-nowledge-mem
```
Enable the plugin in OpenClaw config
Open `~/.openclaw/openclaw.json` and add:
```json
{
"plugins": {
"slots": {
"memory": "openclaw-nowledge-mem"
},
"entries": {
"openclaw-nowledge-mem": {
"enabled": true,
"config": {
"autoRecall": true,
"autoCapture": false,
"maxRecallResults": 5
}
}
}
}
}
```
Restart OpenClaw and verify
```bash
openclaw nowledge-mem status
```
If Nowledge Mem is reachable, you're done.
Verify It Works (1 Minute) [#verify-it-works-1-minute]
In OpenClaw chat:
1. `/remember We chose PostgreSQL for task events`
2. `/recall PostgreSQL` — should find it immediately
3. `/new` — start a fresh session
4. Ask: `What database did we choose for task events?` — it remembers across sessions
5. Ask: `What was I working on this week?` — weekly activity view
6. Ask: `What was I doing on February 17?` — down to the exact day
7. `/forget PostgreSQL task events` — clean deletion
If all seven steps work, the memory system is fully running.
What You Can Do [#what-you-can-do]
**Remember anything, forever**
Tell the AI `/remember We decided against microservices — the team is too small`. Next week, in a different session, ask "what was that decision about microservices?" It finds it.
**Browse your work by date**
Ask "what was I doing last Tuesday?" and the AI lists everything you saved, documents you added, and insights generated that day. You can ask for a specific date — not just "the past N days."
**Trace a decision's history**
Ask the AI "how did this idea develop?" and it shows you: the original source documents that informed it, which related memories were synthesized into a higher-level insight, and how your understanding changed over time.
**Start every session already in context**
Every morning, the Knowledge Agent produces a daily briefing: what you're focused on, open questions, recent changes. Your agent reads it at the start of every session. You never repeat yourself.
**Save knowledge with structure, not just text**
When you ask the AI to remember something, it doesn't just store text — it records the type (decision, learning, preference, plan…), when it happened, and links it to related knowledge. Searching by type, by date, or by topic all works because the structure is there.
**Slash commands**: `/remember`, `/recall`, `/forget`
How the Hooks Work [#how-the-hooks-work]
Both `autoRecall` and `autoCapture` run in the background via plugin lifecycle hooks — they are not AI decisions. The agent never calls a hidden "save" function. The plugin code fires at specific moments, regardless of what the agent is doing.
autoRecall — What happens at session start [#autorecall--what-happens-at-session-start]
Before the agent sees your message, the plugin silently:
1. Reads your **Working Memory** — the daily briefing the Knowledge Agent generates each morning (focus areas, open questions, recent changes)
2. Searches your knowledge graph for **memories relevant to your current prompt**
3. Prepends both as invisible context to the system prompt, along with guidance on which Nowledge Mem tools are available
The agent starts each session already aware of your context. You don't ask for it. It just works.
autoCapture — What happens at session end [#autocapture--what-happens-at-session-end]
By default, the agent only saves when you ask it to (`autoCapture: false`). Turn it on to capture automatically:
```json
"autoCapture": true
```
At the end of each session (and at context compaction and reset), **two independent things happen**:
**1. The full conversation is saved as a thread.** Every message — yours and the agent's — is appended to a persistent thread in Nowledge Mem, keyed to this session. This happens unconditionally on every successful session end, no matter what was said. You can browse threads chronologically with `nowledge_mem_timeline`, or search them from any tool.
**2. A memory note may be extracted.** If your last message contains a decision, preference, or stated fact — for example "I prefer TypeScript" or "we decided against microservices" — a separate structured memory is also created. Questions, short messages, and slash commands are skipped. The memory note is independent of the thread: both can happen, one, or neither.
**Context compaction** is when OpenClaw compresses a long conversation to fit the model's context window. The plugin captures the transcript at that moment too — messages that get compressed away still end up in your knowledge base.
Messages are deduplicated — if the plugin fires at both session end and reset, you won't get duplicate entries.
Use Across Multiple Machines [#use-across-multiple-machines]
If OpenClaw runs on a different machine than Nowledge Mem, add your server address to the plugin config:
```json
"apiUrl": "https://your-nowledge-mem-url",
"apiKey": "nmem_..."
```
Or via environment variables:
```bash
export NMEM_API_URL="https://your-nowledge-mem-url"
export NMEM_API_KEY="nmem_..."
```
The API key is passed only through the process environment — it never appears in logs or command history. See [Access Mem Anywhere](/docs/remote-access).
Troubleshooting [#troubleshooting]
**Plugin is installed but OpenClaw isn't using it**
Check that `plugins.slots.memory` is exactly `openclaw-nowledge-mem`, and that you restarted OpenClaw after editing the config.
**"Duplicate plugin id detected" warning**
This happens if you previously installed the plugin locally (e.g. with `--link`) and then installed from npm. OpenClaw is loading it from both places. Fix it by removing the local path from your config:
Open `~/.openclaw/openclaw.json` and delete the `plugins.load.paths` entry that points to the local plugin directory:
```json
"load": {
"paths": []
}
```
Then restart OpenClaw. The warning will be gone and only the npm-installed version will load.
**Status shows not responding**
```bash
nmem status
curl -sS http://127.0.0.1:14242/health
```
**Search returns too few results**
Raise `maxRecallResults` to `8` or `12`.
Why Nowledge Mem? [#why-nowledge-mem]
Other memory tools store what you said as text and retrieve it by semantic similarity. Nowledge Mem is different.
**Knowledge has structure.** Every memory knows what type it is — decision, learning, plan, preference — when it happened, which source documents it came from, and how it relates to other memories. That's what makes search precise and reasoning reliable.
**Knowledge evolves.** The understanding you wrote today connects to the updated version you saved three months later. You can see how your thinking changed, without losing the intermediate steps.
**Knowledge has provenance.** Every piece of knowledge extracted from a PDF, document, or web page links back to its source. When the AI says "based on your March design doc," you can verify it.
**Knowledge travels across tools.** What you learned in Cursor, saved in Claude, refined in ChatGPT — all available in OpenClaw. Your knowledge belongs to you, not to any one tool.
**Local first, no cloud required.** Your knowledge lives on your machine. Remote access is available when you need it, not imposed by default.
How search ranking works: [Search & Relevance](/docs/search-relevance).
For Advanced Users [#for-advanced-users]
OpenClaw's `MEMORY.md` workspace file still works for workspace context. Memory tool calls are handled by Nowledge Mem, but both can coexist.
The plugin communicates with Nowledge Mem through the `nmem` CLI. Local and remote modes behave identically — configure the address once and every tool call routes correctly.
References [#references]
* Plugin source: [nowledge-mem-openclaw-plugin](https://github.com/nowledge-co/community/tree/main/nowledge-mem-openclaw-plugin)
* OpenClaw docs: [Plugin system](https://docs.openclaw.ai/tools/plugin)
* Changelog: [CHANGELOG.md](https://github.com/nowledge-co/community/blob/main/nowledge-mem-openclaw-plugin/CHANGELOG.md)
# Search Through Time (/docs/use-cases/bi-temporal)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
The Problem [#the-problem]
The board asks: *"Why did you choose React Native over Flutter in Q1?"*
You remember the decision. But you remember it through the lens of everything that happened after: the pivot, the performance issues, the rewrite.
You need to answer: **What did you know THEN?**
> "I can search my notes for 'React Native'. But I can't search for 'what I believed in March about React Native'."
The Solution [#the-solution]
Nowledge Mem uses **bi-temporal search**: two dimensions of time that let you find exactly what you're looking for.
**Event Time**: When did the thing actually happen?
**Record Time**: When did you capture it?
Search either. Search both. Travel through your own history.
Blog: [How We Taught Nowledge Mem to Forget](https://nowledge-labs.ai/blog/memory-decay-temporal).
Documentation about [Search & Relevance](/docs/search-relevance).
How It Works [#how-it-works]
Natural Language Queries [#natural-language-queries]
Just search naturally. Nowledge Mem understands temporal intent:
> "What did I decide about React Native in Q1 2024?"
The system:
1. Detects temporal intent: "Q1 2024"
2. Searches memories where the **event** occurred in that period
3. Returns results with original context
No special syntax needed.
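Because no special syntax is needed, the same query works anywhere search does, including the CLI (an illustrative example; results depend on your own memories):
```bash
# Natural-language temporal query through the CLI
nmem m search "What did I decide about React Native in Q1 2024?"
```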
Explicit Temporal Filters [#explicit-temporal-filters]
For precise control, use the advanced search:
| Filter | Meaning | Example |
| -------------------- | --------------------- | ---------- |
| **Event Date From** | Event happened after | 2024-01-01 |
| **Event Date To** | Event happened before | 2024-03-31 |
| **Record Date From** | Written down after | 2024-01-01 |
| **Record Date To** | Written down before | 2024-12-31 |
**Power Query Example:**
> Event Time: March 2024
> Record Time: Any
Returns: *"All memories about events from March 2024, regardless of when you recorded them."*
Flexible Date Precision [#flexible-date-precision]
Nowledge Mem handles flexible dates:
* **Year**: "2024" -> Matches anything in 2024
* **Month**: "2024-03" -> Matches March 2024
* **Day**: "2024-03-15" -> Matches that specific day
The system preserves your original precision and displays accordingly.
Knowledge Evolution [#knowledge-evolution]
Bi-temporal search gets even more powerful with Knowledge Evolution. Background Intelligence automatically detects when your thinking on a topic changes:
**Tuesday**: You save "Using PostgreSQL for the new service."
**Thursday**: You mention CockroachDB as a migration target.
**Friday**: Background Intelligence links them with an EVOLVES relationship and flags the tension.
Now when you search "database decisions," you don't just get isolated memories. You get the **evolution chain**: the original decision, the update, and the relationship between them. You can see exactly how your thinking shifted and when.
Evolution types:
* **Replaces**: Newer information makes older obsolete
* **Enriches**: Newer adds detail to older
* **Confirms**: Same conclusion from a different source
* **Challenges**: Contradictory information flagged for review
Real Examples [#real-examples]
Board Retrospective [#board-retrospective]
> **Query**: "architecture decisions in Q1 2024"
>
> **Result**: Original decision memos with Q1 context, plus evolution chains showing how decisions changed after
Compliance Audit [#compliance-audit]
> **Query**: "security policies before the incident"
>
> **Result**: What policies existed before the breach, with record timestamps proving when they were documented
Project Post-Mortem [#project-post-mortem]
> **Query**: "project-x assumptions from kickoff"
>
> **Result**: Original assumptions that turned out wrong, linked to the later insights that proved them wrong
Knowledge Graph + Time [#knowledge-graph--time]
Your graph view has a **timeline slider** that filters nodes and edges by date range.
Set the range to "March 2024" and see:
* Only entities that existed then
* Only connections that were known then
* The state of your knowledge at that moment
Drag the slider forward and watch your understanding evolve. Play the animation to see knowledge accumulate over time.
How Memory Decay Works [#how-memory-decay-works]
Not all memories age equally. Like your brain, Nowledge Mem:
* **Prioritizes recent memories** by default (30-day half-life)
* **Boosts frequently accessed** memories (logarithmic scaling)
* **Respects importance** scores you set (importance floor prevents full decay)
* **Learns from your behavior** (clicks, dwell time)
This means casual searches surface fresh, relevant results, but temporal searches bypass decay to find exactly what you asked for.
Temporal intent detection requires **Deep Mode** search. In Fast Mode, temporal references are matched by keywords only. Enable Deep Mode for queries like "recently working on" or "decisions from last quarter."
See [Search & Relevance](/docs/search-relevance) for the full technical breakdown of how scoring, decay, and temporal matching work.
The Two Times [#the-two-times]
Understanding the difference is key:
| Question | Which Time? |
| ------------------------------------ | ----------- |
| "What did I decide in March?" | Event Time |
| "What did I write last week?" | Record Time |
| "Show recent notes about old events" | Both |
| "What did I know before the pivot?" | Event Time |
Most searches use **event time** because you're asking about when things happened.
**Record time** is useful for:
* Finding recent captures
* Reviewing what you've been documenting
* Auditing when knowledge was recorded
Why This Matters [#why-this-matters]
Traditional search finds content. Temporal search finds **context**. Knowledge Evolution finds **the story**.
> "We didn't make a bad decision. We made the best decision with what we knew. Here's the proof. And here's exactly when and why our thinking changed."
Your memories are time-stamped, version-controlled, and historically accurate.
Next Steps [#next-steps]
* [Own Your Knowledge](/docs/use-cases/shared-memory) -> Use any tool without losing context
* [See Your Expertise](/docs/use-cases/expertise-graph) -> Visualize your knowledge
* [Background Intelligence](/docs/advanced-features) -> Knowledge graph capabilities
# See Your Expertise (/docs/use-cases/expertise-graph)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
The Problem [#the-problem]
You've been learning for years. Building expertise. Accumulating knowledge.
But can you see it?
> I know I'm good at... stuff. Technical stuff. But if someone asked me to describe my expertise, I'd struggle. It's all intuition. Nothing concrete.
Your knowledge is invisible. Scattered across memories, notes, conversations. You can't see the patterns. The connections. The clusters of expertise.
The Solution [#the-solution]
Nowledge Mem visualizes your knowledge as a **living graph**. Nodes are your memories and entities. Edges are relationships. And the graph **builds itself**: Background Intelligence automatically extracts entities and relationships from your memories overnight.
Run **community detection** and watch your expertise clusters emerge:
How It Works [#how-it-works]
The Graph Builds Itself [#the-graph-builds-itself]
You don't need to manually tag or categorize anything. Background Intelligence reads your memories and extracts:
* **Entities**: Technologies, people, concepts, projects
* **Relationships**: How they connect to each other
* **Evolution chains**: How your thinking on a topic has changed
This happens automatically. Save memories through any channel (auto-sync, browser extension, Timeline, `/sum`) and the graph grows on its own.
Automatic entity extraction requires a [Pro license](/docs/mem-pro) and a configured Remote LLM.
Run Community Detection [#run-community-detection]
In the right panel, find **Graph Algo** and click Compute under **Clustering**.
The Louvain algorithm analyzes your knowledge structure and finds natural clusters:
| Community | Size | Theme |
| ------------------- | ----------- | ----------------------------- |
| Distributed Systems | 87 memories | Backend architecture, scaling |
| Team Leadership | 45 memories | Mentoring, communication |
| Performance | 62 memories | Optimization, profiling |
| Side Projects | 23 memories | Creative experiments |
Each cluster gets a colored "bubble" around its nodes.
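If you prefer to script it, the API reference includes [Trigger Community Detection](/docs/api/agent/trigger/community-detection/post) and [List Communities](/docs/api/communities/get) endpoints. A minimal sketch, assuming the endpoint paths mirror their documentation paths and the default local address:

```bash
# Manually trigger community detection on the knowledge graph
# (paths inferred from the API reference; adjust if your setup differs)
curl -sS -X POST http://127.0.0.1:14242/agent/trigger/community-detection

# Then list the detected communities with their AI summaries
curl -sS http://127.0.0.1:14242/communities
```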
Travel Through Time [#travel-through-time]
The **timeline slider** at the bottom of the graph lets you filter by date range.
Drag to "January 2024" and see your knowledge at that point. Drag forward and watch new clusters form, existing ones grow, and connections multiply.
Play the animation to watch your expertise evolve over months. See when a new interest emerged, when it connected to existing knowledge, and when it grew into a full cluster.
Explore and Discover [#explore-and-discover]
Navigate the graph:
* **Click** any node to see its details
* **Double-click** to expand neighbors
* **Shift+drag** to lasso-select multiple nodes
* **Press C** to toggle community bubbles
* **Press E** to expand selected node's neighbors
Find patterns you never noticed:
> Every leadership memory links back to debugging sessions. I lead by teaching debugging.
What You'll Discover [#what-youll-discover]
Expertise Clusters [#expertise-clusters]
Community detection reveals where your knowledge naturally groups:
* **Core strengths**: Large, dense clusters
* **Emerging areas**: Small but growing clusters
* **Bridges**: Nodes that connect multiple clusters (often your most unique skills)
Knowledge Evolution [#knowledge-evolution]
Background Intelligence tracks how your thinking changes:
* **Tuesday**: "Using PostgreSQL for the new service"
* **Thursday**: "Considering CockroachDB for migration"
* **Friday briefing**: "Your database thinking is evolving"
These evolution chains appear as linked nodes in the graph. You can see exactly where your opinions shifted and follow the trail.
Hidden Patterns [#hidden-patterns]
Explore and find:
* Recurring themes you never consciously tracked
* Connections between seemingly unrelated projects
* Your unique perspective and approach
* Gaps between related topics
Asking AI About Your Graph [#asking-ai-about-your-graph]
With your graph in view, ask AI Now to interpret it:
> Based on my knowledge graph, what career paths fit me best?
AI Now synthesizes:
> Your memories show a unique intersection of deep systems knowledge with teaching ability. Your most central concepts (event-driven architecture, debugging) connect to both technical and leadership clusters. Consider: Staff Engineer, Developer Advocate, or Engineering Manager with technical focus.
Other questions to try:
* "What are my strongest expertise areas?"
* "Where are the gaps in my knowledge?"
* "What topics should I explore next?"
* "How has my focus shifted over time?"
The Compound Effect [#the-compound-effect]
More memories = richer graph = deeper insights.
**After 1 month:**
> I can see my main topics, but clusters are small
**After 6 months:**
> Clear expertise areas. Unexpected connections emerging. Background Intelligence is finding patterns I missed.
**After 1 year:**
> I can literally see how my thinking has evolved. The connections I made last year laid groundwork for this year.
**For performance reviews:**
> I explored my graph before the review. Had concrete examples of growth across every dimension.
Next Steps [#next-steps]
* [Background Intelligence](/docs/advanced-features) -> How the graph grows automatically
* [Own Your Knowledge](/docs/use-cases/shared-memory) -> Use any tool without losing context
* [Search Through Time](/docs/use-cases/bi-temporal) -> Temporal queries and evolution chains
# Overview (/docs/use-cases)
import { Cards, Card } from 'fumadocs-ui/components/card';
import { Brain, Clock, FileText, Network, MessageSquare, Sparkles } from 'lucide-react';
Nowledge Mem learns from everything you do with AI. It auto-captures conversations, syncs sessions in real time, and builds a knowledge graph that grows overnight. Every connected tool starts with your full context.
} href="/docs/use-cases/shared-memory" title="Own Your Knowledge">
Tell Claude once. Cursor already knows. One knowledge base across every AI tool you use.
} href="/docs/use-cases/session-backup" title="Never Lose a Session">
Sessions auto-sync in real time. Claude Code, Cursor, Codex, ChatGPT -- every conversation captured.
} href="/docs/use-cases/bi-temporal" title="Search Through Time">
The board asks why you chose React Native. Find what you believed then, not what you know now.
} href="/docs/use-cases/notes-everywhere" title="Your Notes, Everywhere">
Obsidian, Notion, PDFs, Word docs. One search covers all your knowledge sources.
} href="/docs/use-cases/expertise-graph" title="See Your Expertise">
The graph builds itself from your memories. Community detection reveals expertise clusters you didn't know you had.
} href="/docs/ai-now" title="AI Now">
A personal AI agent with your full knowledge. Deep research, file analysis, presentations — purpose-built capabilities on your machine.
Three Things That Change [#three-things-that-change]
**It captures automatically.** The browser extension grabs insights from ChatGPT, Claude, Gemini, and 13+ platforms. Sessions from Claude Code, Cursor, and Codex sync in real time. You stop copying and pasting between tools.
**It learns while you sleep.** Background Intelligence detects when your thinking evolves, synthesizes reference articles from scattered memories, and flags contradictions. Your morning briefing at `~/ai-now/memory.md` tells your AI tools what you're working on before you say anything.
**It goes where you go.** One command connects 20+ AI agents. Switch tools freely. Your knowledge stays.
How It Works [#how-it-works]
1. **Capture** -- browser extension, session sync, or type it into the Timeline
2. **Connect** -- the system links it to everything you already know
3. **Grow** -- Background Intelligence builds evolution chains, crystals, and flags overnight
4. **Use** -- any connected tool finds it when it's relevant
Your knowledge compounds in Mem, independent of any single tool.
Ready to Start? [#ready-to-start]
Pick a use case above, or go straight to [Getting Started](/docs/getting-started) to set up Nowledge Mem.
# Your Notes, Everywhere (/docs/use-cases/notes-everywhere)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
The Problem [#the-problem]
You've been taking notes for years. Obsidian. Notion. Maybe both.
Thousands of entries. Carefully tagged. Extensively linked.
And yet...
> I know I wrote about this. I just can't find it. The search is useless. The tags don't help.
Worse: Your AI assistant has no idea any of this exists. You're explaining context that's already in your notes. Every. Single. Time.
The Solution [#the-solution]
We don't replace your note app. We **wire it into your knowledge**.
Keep using Obsidian, Notion, Apple Notes, or folders of Markdown files exactly as you do now. Nowledge Mem connects to them, making your notes searchable alongside your memories, both by AI Now and by any AI tool connected via MCP.
And with the **Library**, you can drop PDFs, Word documents, and presentations in too. Everything becomes searchable from one place.
How It Works [#how-it-works]
Connect Your Notes [#connect-your-notes]
**Obsidian:**
1. Open AI Now in Nowledge Mem
2. Go to **Plugins** -> Enable **Obsidian Vault**
3. Set your vault path (e.g., `/Users/you/Documents/ObsidianVault`)
4. Done. AI Now can now search your vault
**Notion:**
1. Open AI Now -> **Plugins** -> Enable **Notion**
2. Click **Connect with Notion**
3. Authorize access in the browser popup
4. Your workspace is now accessible
Import Documents to the Library [#import-documents-to-the-library]
Drop files directly into the Timeline input or open the Library view:
| Format | Extensions | What Happens |
| ----------------- | ----------- | -------------------------------------------- |
| **PDF** | .pdf | Text extracted, split into segments, indexed |
| **Word** | .docx, .doc | Parsed to text, segmented, indexed |
| **Presentations** | .pptx | Slide content extracted and indexed |
| **Markdown** | .md | Parsed and indexed directly |
Once indexed, document content is searchable alongside your memories and notes.
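Indexed documents are also reachable programmatically through the [Search Sources](/docs/api/sources/search/get) endpoint, which does full-text search across source names and content. A minimal sketch, assuming the default local address and a hypothetical query parameter name:

```bash
# Full-text search across imported documents (sources)
# The "q" parameter name is an assumption; see the Search Sources API reference.
curl -sS "http://127.0.0.1:14242/sources/search?q=quantum+computing"
```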
Search Across Everything [#search-across-everything]
Ask AI Now any question:
> What do my notes say about quantum computing?
AI Now:
1. Searches your Obsidian vault
2. Searches your Notion workspace
3. Searches your Nowledge memories
4. Searches your Library documents
5. Combines and synthesizes results
One question. All your knowledge sources.
Distill Into Memories [#distill-into-memories]
Found valuable notes? Turn them into permanent memories:
> Distill the key insights from these quantum computing notes
AI Now creates:
* **Insight**: "Quantum error correction requires O(n^2) qubits"
* **Decision**: "Focus on NISQ algorithms for near-term research"
* **Fact**: "IBM claimed quantum advantage Dec 2023"
These memories are now:
* Searchable with semantic understanding
* Connected in the knowledge graph
* Accessible to ALL your AI tools via MCP
* Part of your Working Memory briefing when relevant
Obsidian Integration [#obsidian-integration]
Setup [#setup]
1. Open Nowledge Mem
2. Click the AI Now tab
3. Go to **Plugins** in the sidebar
4. Find **Obsidian Vault** and toggle it on
5. Enter your vault path, for example `/Users/yourname/Documents/ObsidianVault`
What You Can Do [#what-you-can-do]
Once connected:
* Search notes by content: *"Find my notes about machine learning"*
* Read specific notes: *"Show me the note about project kickoff"*
* Reference in context: *"Based on my Obsidian notes about X, help me..."*
Your vault is read locally. Notes are never uploaded anywhere. Nowledge Mem just reads the files on your machine.
Notion Integration [#notion-integration]
Setup [#setup-1]
1. Open AI Now -> **Plugins**
2. Find **Notion** and click **Connect**
3. Authorize in the browser popup
4. Select the workspaces you want to connect
What You Can Do [#what-you-can-do-1]
* Search your workspace: *"Find pages about quarterly planning"*
* Read page content: *"What's in my Product Roadmap page?"*
* Cross-reference: *"Compare my Notion notes with my memories about X"*
* Deep Research with both public information and private knowledge: *"What's the latest on quantum computing?"*
Notion uses secure OAuth. You control exactly which pages Nowledge Mem can access. Revoke anytime from Notion settings.
Built-in Integrations [#built-in-integrations]
Some tools have Nowledge Mem built in:
* **DeepChat**: Toggle Nowledge Mem in settings. Your memories become available in every chat.
* **LobeHub**: Install from the marketplace. Full MCP integration.
Coming Soon [#coming-soon]
* **Apple Notes** integration
Join the [Community](/docs/community) to request integrations.
Next Steps [#next-steps]
* [AI Now](/docs/ai-now) -> Learn what else AI Now can do
* [Library](/docs/library) -> Import and search documents
* [See Your Expertise](/docs/use-cases/expertise-graph) -> Visualize your knowledge graph
* [Integrations](/docs/integrations) -> Full setup guides
# Never Lose a Session (/docs/use-cases/session-backup)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
The Problem [#the-problem]
You just had an epic debugging session. Three hours with Claude Code. You found a race condition, traced it through 15 files, built a bulletproof fix with tests.
But AI conversations are ephemeral. Context gets compacted, token limits hit, and sessions expire. That 200-message thread? The early context is already gone.
> "I solved this exact problem before. I just can't remember how. Or where. Or when."
The Solution [#the-solution]
Your sessions sync automatically. Claude Code, Cursor, Codex, and OpenCode conversations are captured in real time. Browser conversations from ChatGPT, Claude, and Gemini are grabbed by the extension. No commands to remember. No manual exports.
When you're ready, distill a thread into permanent, searchable, graph-connected memories.
How It Works [#how-it-works]
Sessions Sync Automatically [#sessions-sync-automatically]
**Claude Code and Codex (npx skills):**
Install once:
```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```
Sessions are saved automatically. The agent distills key insights at session end.
**Cursor and OpenCode (Auto-Sync):**
Nowledge Mem watches for new conversations in real time. Open **Threads** to see them appear as you work. No import step needed.
**Browser (ChatGPT, Gemini, Claude Web):**
The Exchange v2 extension captures conversations from 13+ AI chat platforms. Insights flow into Mem as you chat.
**Manual save (any MCP tool):**
```
/save -> Checkpoint the full thread
/sum -> Distill insights into memories
```
Distill Into Permanent Knowledge [#distill-into-permanent-knowledge]
Open a saved thread and click **Distill**. The AI reads the entire conversation and extracts:
* **Decisions**: "Chose sliding window over token bucket because..."
* **Insights**: "Race conditions in async callbacks need mutex locks"
* **Patterns**: "Testing time-based bugs requires mock clocks"
* **Facts**: "Redis SETNX provides atomic lock acquisition"
Each becomes a standalone, searchable memory with proper labels.
Background Intelligence Connects It [#background-intelligence-connects-it]
Your new memories don't sit in isolation. Background Intelligence:
* Links them to previous work on the same codebase
* Detects if they update or contradict earlier decisions
* Connects them to related entities in the knowledge graph
* Surfaces them in your next morning's Working Memory briefing
Three months later, a colleague hits the same bug. Your briefing mentions it before they even ask.
Search Anytime [#search-anytime]
Three months later, similar bug appears:
> Search: "payment race condition"
Nowledge Mem returns the full context: the problem, the debugging steps, the solution, the test approach.
No more re-solving solved problems.
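The same lookup works from any terminal with the `nmem` CLI:

```bash
# Find the old debugging session from the command line
nmem m search "payment race condition"

# JSON output if you want to feed it into a script
nmem --json m search "payment race condition" | jq '.memories[0].content'
```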
What Gets Captured [#what-gets-captured]
| Source | How | What You Get |
| --------------- | -------------------------------- | ------------------------------ |
| **Claude Code** | npx skills (auto) or `/save` | Full session with code context |
| **Codex** | npx skills (auto) or `/save` | Full session with code context |
| **Cursor** | Auto-sync (real-time watching) | Conversations as they happen |
| **OpenCode** | Auto-sync (real-time watching) | Conversations as they happen |
| **ChatGPT** | Browser extension (auto-capture) | Insights from web chats |
| **Claude Web** | Browser extension (auto-capture) | Insights from web chats |
| **Gemini** | Browser extension (auto-capture) | Insights from web chats |
| **13+ more** | Browser extension | Any supported AI chat platform |
What Gets Extracted [#what-gets-extracted]
When you distill a thread, the AI creates memories categorized by type:
| Type | Example | Labels |
| -------------- | --------------------------------------- | ---------------------- |
| **Decision** | "Used Redis for distributed locking" | decision, architecture |
| **Insight** | "Async callbacks need careful ordering" | insight, debugging |
| **Procedure** | "Steps to reproduce race conditions" | procedure, testing |
| **Fact** | "SETNX returns 1 if key was set" | fact, redis |
| **Experience** | "Debugging session on payment service" | experience, project |
The Compound Effect [#the-compound-effect]
One thread saved is useful. Ten threads saved is a knowledge base. A hundred threads? That's institutional memory.
> "Junior dev hit the same bug today. Sent them my memory. They fixed it in 20 minutes instead of 3 hours."
Your debugging sessions aren't just conversations. They're training data for your future self.
Pro Tips [#pro-tips]
You don't need to distill every thread. Save important sessions: the breakthroughs, the architectural decisions, the hard-won solutions.
For sensitive codebases, review what you're saving. Threads might contain proprietary code or credentials.
Next Steps [#next-steps]
* [Own Your Knowledge](/docs/use-cases/shared-memory) -> Use any tool without losing context
* [Search Through Time](/docs/use-cases/bi-temporal) -> Find memories from specific time periods
* [Integrations](/docs/integrations) -> Setup guides for each tool
# Own Your Knowledge (/docs/use-cases/shared-memory)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
The Problem [#the-problem]
You told Claude Code about your project architecture last week. Today, you're explaining it again to Cursor. Tomorrow, you'll try the new tool everyone's talking about, and start from scratch.
This isn't a memory problem. It's a lock-in problem. Your knowledge is trapped inside whichever tool you used last.
> "I already explained this. Why do I have to start over in a different tool?"
The Solution [#the-solution]
Nowledge Mem is a knowledge layer that sits between you and every AI tool you use. It captures your insights automatically, syncs your sessions in real time, and writes a daily briefing so every tool starts with your full context.
One command to connect. Zero workflow changes.
How It Works [#how-it-works]
Connect in One Command [#connect-in-one-command]
```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```
Works with Claude Code, Cursor, Codex, OpenCode, OpenClaw, Alma, and 20+ other agents. Installs four skills: Working Memory briefing, knowledge search, session saving, and insight capture.
After setup, your agent reads your morning briefing at session start, searches your knowledge mid-task, and saves what it learns.
Capture Happens Automatically [#capture-happens-automatically]
You don't need to remember to save. Mem captures from multiple channels:
**Browser Extension (Exchange v2):**
The extension monitors your AI chats on ChatGPT, Claude, Gemini, and 13+ platforms. Insights are captured automatically as you work.
**Session Auto-Sync:**
Claude Code, Cursor, Codex, and OpenCode sessions sync in real time. A 3-hour debugging session is preserved without you typing a command.
**Timeline Input:**
Type a thought, paste a URL, drop a file. For the times you want to save something specific.
**Manual Commands:**
```
/sum -> Summarize this conversation into memories
/save -> Checkpoint the entire thread
```
Every Tool Starts Informed [#every-tool-starts-informed]
Each morning, Background Intelligence writes a briefing to `~/ai-now/memory.md`. Every connected AI tool reads it at session start.
Your agent already knows:
* What you're working on
* Decisions you made recently
* Open questions and contradictions
* How your thinking has evolved
No re-explanation needed. Open Claude Code at 9 AM and it picks up where you left off.
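The briefing is a plain Markdown file, so you can read it yourself at any time. There is also a [Get Working Memory](/docs/api/agent/working-memory/get) endpoint; the `curl` line below assumes its path mirrors the documentation path and the default local address.

```bash
# Read today's briefing directly from disk
cat ~/ai-now/memory.md

# Or fetch it over the local API (path inferred from the API reference)
curl -sS http://127.0.0.1:14242/agent/working-memory
```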
Switch Tools Freely [#switch-tools-freely]
New tool? Connect it to Mem. It immediately has your full context.
**Example:**
You saved: *"Architecture decision: Using Redis for session management because..."*
Later, in Cursor: *"Help me add session handling"*
Cursor searches your knowledge, finds the Redis decision, applies the same pattern. No re-explanation needed.
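The same round trip works from the terminal with the documented `nmem` commands:

```bash
# Save the decision once...
nmem m add "Architecture decision: Using Redis for session management because..."

# ...and retrieve it later, from any tool or shell
nmem m search "session management"
```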
Real Example [#real-example]
**Without Nowledge Mem:**
> You: "Help me implement rate limiting"
>
> Claude: "What kind? Token bucket? Sliding window? What's your use case?"
>
> You: *\[Explains for the 5th time this month]*
**With Nowledge Mem:**
> You: "Help me implement rate limiting"
>
> Claude: *\[Reads your Working Memory briefing, searches your memories]* "Based on your decision last month to use sliding window rate limiting for the payment service, here's an implementation matching your Redis patterns..."
What Gets Connected [#what-gets-connected]
| Channel | How It Works | What Gets Captured |
| --------------------- | -------------------------- | ---------------------------------------------------- |
| **npx skills** | One command, 20+ agents | Working Memory, search, save, distill |
| **Browser Extension** | Auto-capture from AI chats | Insights from ChatGPT, Claude, Gemini, 13+ platforms |
| **Session Auto-Sync** | Real-time watching | Claude Code, Cursor, Codex, OpenCode sessions |
| **MCP** | Direct protocol connection | Any MCP-compatible tool |
| **Claude Desktop** | One-click extension | Full integration |
| **Built-in** | Toggle in settings | DeepChat, LobeHub |
The Compound Effect [#the-compound-effect]
A few weeks in, any new tool you connect already knows how you work. Your preferences persist across tools. Your decisions compound. Every insight you've ever saved is available to every tool you'll ever use.
The value lives in Mem, not in any single tool.
Next Steps [#next-steps]
* [Never Lose a Session](/docs/use-cases/session-backup) -> Auto-sync and backup AI conversations
* [Search Through Time](/docs/use-cases/bi-temporal) -> Find what you knew when
* [Integrations](/docs/integrations) -> Connect all your tools
# 后台智能 (/docs/zh/advanced-features)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
一月,你保存了一个使用 PostgreSQL 的决策。七月,你记录了正在迁移到 CockroachDB。你从未把两者联系起来。Nowledge Mem 做到了。它将它们关联,追踪演变,下次你搜索任何一个,两者都会出现,带着你的思维如何变化的完整故事。
这发生在你睡觉时。你打开应用,连接已经在那里了。
AI Now [#ai-now]
AI Now 是运行在你本地的个人 AI 智能体。它拥有你的完整知识库、连接的笔记和网络。精心打造的能力——不只是聊天:
* **深度研究**:同时搜索你的记忆和网络,综合分析
* **文件分析**:在上下文中理解你的数据——"和上季度比有什么变化"之所以能回答,是因为它知道上季度
* **演示文稿**:实时预览,导出 PowerPoint
* **插件**:Obsidian、Notion、Apple Notes 和任何 MCP 服务
当你问缓存方案时,它已经知道你上个月的 Redis 决策。当你分析数据时,它把数字和你的目标、历史关联起来。每一项能力都建立在你的知识之上。
AI Now 需要远程 LLM。详见 [AI Now](/zh/docs/ai-now) 完整指南。
命令行 [#命令行]
`nmem` CLI 让你从任何终端获得完整访问:
```bash
# 搜索你的记忆
nmem m search "authentication patterns"
# 添加记忆
nmem m add "We chose JWT with 24h expiry for the auth service"
# JSON 输出用于脚本
nmem --json m search "API design" | jq '.memories[0].content'
```
详见 [CLI 参考](/zh/docs/cli)获取完整命令集。
远程 LLM [#远程-llm]
默认在本地运行,不需要联网。知识库增长后,远程 LLM 能给你更强的处理能力。
远程 LLM 配置需要 [Pro 许可证](/zh/docs/mem-pro)。
**解锁的功能:**
* **后台智能**:自动发现关联、生成 Crystal、产出 Insight 以及每日简报
* 更快的知识图谱提取
* 更细腻的语义理解
* AI Now Agent 能力
**隐私:** 你的数据仅发送到你选择的 LLM 提供商,永远不会发送到 Nowledge Mem 服务器。你可以随时切换回纯本地模式。
1. 前往 **Settings > Remote LLM**
2. 开启 **Remote** 模式
3. 选择你的 LLM 提供商并输入 API 密钥
4. 测试连接,选择模型,保存
下一步 [#下一步]
* **[AI Now](/zh/docs/ai-now)**: 基于你的知识进行深度研究和分析
* **[后台智能](/zh/docs/advanced-features)**: 你的知识如何自动成长:知识图谱、Insight、Crystal、Working Memory
* **[集成](/zh/docs/integrations)**: 连接你所有的 AI 工具
# List Communities (/docs/api/communities/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
List knowledge communities with AI summaries.
# List Entities (/docs/api/entities/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
List entities with optional filtering.
# Health Check (/docs/api/health/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Health check endpoint.
# List Labels (/docs/api/labels/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
List all labels with usage counts.
# Create Label (/docs/api/labels/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Create a new label.
# List Memories (/docs/api/memories/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
List memories with filtering and pagination.
# Create Memory (/docs/api/memories/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Create a new memory with automatic entity extraction.
# List Sources (/docs/api/sources/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
List sources with optional filtering and pagination.
# List Threads (/docs/api/threads/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
List threads with filtering and pagination.
# Create Thread (/docs/api/threads/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Create a new thread with messages.
# OpenClaw × Nowledge Mem (/docs/zh/integrations/openclaw)
import { Step, Steps } from 'fumadocs-ui/components/steps';
配置完成后,你的 OpenClaw 会记住你在上一次会话说的话,记住你上周做的决定,记住你三个月前写入文档的知识。
开始之前 [#开始之前]
需要准备:
* 本地已运行 Nowledge Mem([安装](/zh/docs/installation))
* 已安装 OpenClaw([OpenClaw 入门](https://docs.openclaw.ai/start/openclaw))
* `nmem` CLI 在你的 PATH 中
```bash
nmem status # 应显示 Nowledge Mem 正在运行
openclaw --version
```
配置步骤 [#配置步骤]
安装插件
```bash
openclaw plugins install @nowledge/openclaw-nowledge-mem
```
在 OpenClaw 配置中启用插件
打开 `~/.openclaw/openclaw.json`,加入以下内容:
```json
{
"plugins": {
"slots": {
"memory": "openclaw-nowledge-mem"
},
"entries": {
"openclaw-nowledge-mem": {
"enabled": true,
"config": {
"autoRecall": true,
"autoCapture": false,
"maxRecallResults": 5
}
}
}
}
}
```
重启 OpenClaw,验证生效
```bash
openclaw nowledge-mem status
```
看到 Nowledge Mem 可访问即配置成功。
验证配置(1 分钟) [#验证配置1-分钟]
在 OpenClaw 聊天中依次执行:
1. `/remember 我们为任务事件选择了 PostgreSQL`
2. `/recall PostgreSQL` — 应立即找到
3. `/new` — 开启新会话
4. 问:`任务事件的数据库我们选的什么?` — 跨会话记住了
5. 问:`这周我都做了什么?` — 按周浏览
6. 问:`2月17日我在忙什么?` — 精确到某一天
7. `/forget PostgreSQL 任务事件` — 删除干净
如果以上七步都顺利,记忆系统已完整运作。
你能做什么 [#你能做什么]
**记住任何事情**
告诉 AI `/remember 我们决定不用微服务,原因是团队太小`,下周换一个会话,直接问"微服务那个决定是怎么说的",它能找到。
**按日期找回工作内容**
问"上周二我在做什么",AI 会列出那天你保存的内容、添加的文档、生成的洞察。支持指定具体日期,不只是"最近 N 天"。
**追溯一个决策的来龙去脉**
问 AI "这条记忆是怎么来的、和什么有关",它会展示:这条知识的原始来源文档、哪些相关记忆被合成为了更高层的洞察、这个认识随时间怎么变化过。
**每天自动带着上下文开始**
每天早上,Nowledge Mem 的知识智能体生成一份今日简报:你在关注什么、有什么未解决的问题、最近有什么新进展。会话开始时 AI 自动读取,不需要你每次重新介绍背景。
**保存时带上类型和时间**
你不只是在保存文字,你在记录结构化的知识。告诉 AI "记住这是一个决策,发生在 2026 年 2 月",它会以正确的类型和时间存进知识图谱。支持 8 种类型:事实、偏好、决策、计划、流程、学习、背景、事件。
**斜杠命令快捷方式**:`/remember`、`/recall`、`/forget`
自动记忆的工作方式 [#自动记忆的工作方式]
`autoRecall` 和 `autoCapture` 都是通过插件生命周期钩子在后台运行的——它们不是 AI 做出的决定,AI 不会在后台调用某个隐藏的"保存"工具。插件代码在特定时刻触发,与 AI 的行为无关。
autoRecall — 会话开始时发生什么 [#autorecall--会话开始时发生什么]
在 AI 看到你的消息之前,插件会悄悄地:
1. 读取你的**工作记忆**(Knowledge Agent 每天早上生成的今日简报:你在关注什么、有什么未解决的问题、最近有什么新进展)
2. 根据你当前的消息,在知识图谱里**搜索相关记忆**
3. 将上述内容以隐式上下文的形式插入系统提示,同时附带 Nowledge Mem 工具的使用指引
AI 一开始就已经了解你的背景,不需要你每次重新介绍。
autoCapture — 会话结束时发生什么 [#autocapture--会话结束时发生什么]
默认情况下,AI 只在你主动要求时才保存(`autoCapture: false`)。开启自动保存:
```json
"autoCapture": true
```
每次会话结束时(以及上下文压缩和重置时),**会有两件独立的事情发生**:
**1. 完整对话会被保存为一个 Thread。** 你和 AI 的所有消息都会被追加到 Nowledge Mem 里一个与本次会话绑定的 Thread 中。这是无条件发生的——只要会话正常结束,不管说了什么都会保存。你可以用 `nowledge_mem_timeline` 按时间浏览这些对话,也可以从任何工具中搜索。
**2. 可能会提取一条记忆。** 如果你最后一条消息包含决策、偏好或陈述性事实——比如"我倾向于用 TypeScript"或"我们决定不用微服务"——插件还会额外创建一条结构化记忆。疑问句、过短的消息和斜杠命令会被跳过。这条记忆与 Thread 是独立的,两者可能都有,也可能只有其一,或者都没有。
**上下文压缩** 是指 OpenClaw 为了让对话适配模型上下文窗口而对长对话进行压缩的过程。插件会在压缩发生时捕获对话记录,被压缩掉的消息不会丢失,仍然会进入你的知识库。
消息会自动去重——就算插件在会话结束和重置时都触发了,Nowledge Mem 里也不会出现重复内容。
在多台机器上使用 [#在多台机器上使用]
如果你的 OpenClaw 运行在另一台机器或服务器上,在插件配置中填入 Nowledge Mem 的地址:
```json
"apiUrl": "https://your-nowledge-mem-url",
"apiKey": "nmem_..."
```
或者通过环境变量:
```bash
export NMEM_API_URL="https://your-nowledge-mem-url"
export NMEM_API_KEY="nmem_..."
```
API 密钥只在内部传递,不会出现在日志或命令行历史中。详见:[随处访问 Mem](/zh/docs/remote-access)。
遇到问题? [#遇到问题]
**插件装了,但 OpenClaw 好像没在用它**
检查 `plugins.slots.memory` 的值是否正好是 `openclaw-nowledge-mem`,确认修改配置后重启了 OpenClaw。
**status 显示无法连接**
```bash
nmem status
curl -sS http://127.0.0.1:14242/health
```
**搜索只找到一两条结果**
把 `maxRecallResults` 调高到 `8` 或 `12`。
为什么用 Nowledge Mem 而不是其他方案? [#为什么用-nowledge-mem-而不是其他方案]
其他记忆工具把你说过的话存成一段段文字,靠语义相似度找回来。Nowledge Mem 不一样。
**知识是有结构的。** 你保存的每条记忆都知道自己是什么类型——决策、学习、计划还是偏好——知道它什么时候发生、指向哪些来源文档、和哪些其他记忆有关联。这让搜索更准、推理更靠谱。
**知识会演化。** 你今天写的理解,和三个月后更新过的认识,在系统里是连在一起的。你可以看到自己的想法怎么变化的,不会丢掉中间的过程。
**知识来自哪里是透明的。** 从 PDF、文档或网页提取的每条知识,都保留着指向原始文件的链接。AI 告诉你"根据你三月份的设计文档",你能直接验证。
**跨工具共享。** 在 Cursor 里学到的,在 Claude 里记下的,在 OpenClaw 里一样能用。你的知识不属于任何一个工具,它属于你。
**本地优先,无需云账户。** 你的知识存在本地。远程访问是可选的,不是必须的。
搜索怎么工作的?参见[搜索与相关性](/zh/docs/search-relevance)。
给进阶用户 [#给进阶用户]
OpenClaw 的 `MEMORY.md` 工作区文件仍然有效,但记忆工具的实际调用全部由 Nowledge Mem 处理。两者可以共存。
插件通过 `nmem` CLI 子进程与 Nowledge Mem 通信。这意味着本地和远程模式的行为完全一致,配置好地址后不需要其他改动。
参考 [#参考]
* 插件源码:[nowledge-mem-openclaw-plugin](https://github.com/nowledge-co/community/tree/main/nowledge-mem-openclaw-plugin)
* OpenClaw 文档:[插件系统](https://docs.openclaw.ai/tools/plugin)
* 更新日志:[CHANGELOG.md](https://github.com/nowledge-co/community/blob/main/nowledge-mem-openclaw-plugin/CHANGELOG.md)
# 穿越时间搜索 (/docs/zh/use-cases/bi-temporal)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
问题所在 [#问题所在]
董事会问:*"为什么你在第一季度选择了 React Native 而不是 Flutter?"*
你记得那个决定。但你是透过之后发生的一切来回忆它的:转型、性能问题、重写。
你需要回答:**你当时知道什么?**
> "我可以搜索我的笔记中的'React Native'。但我不能搜索'我在三月份对 React Native 的看法'。"
解决方案 [#解决方案]
Nowledge Mem 使用**双时态搜索**:两个时间维度让你准确找到你要找的东西。
**事件时间**:事情实际上是什么时候发生的?
**记录时间**:你什么时候捕获的?
可以单独搜索,也可以组合使用。
博客:[我们如何教会 Nowledge Mem 遗忘](https://nowledge-labs.ai/blog/memory-decay-temporal)。
关于[搜索与相关性](/zh/docs/search-relevance)的文档。
工作原理 [#工作原理]
自然语言查询 [#自然语言查询]
只需自然地搜索。Nowledge Mem 理解时间意图:
> "我在 2024 年第一季度对 React Native 做了什么决定?"
系统:
1. 检测时间意图:"2024 年第一季度"
2. 搜索**事件**发生在该期间的记忆
3. 返回带有原始上下文的结果
不需要特殊语法。
显式时间过滤器 [#显式时间过滤器]
对于精确控制,使用高级搜索:
| 过滤器 | 含义 | 示例 |
| --------- | -------- | ---------- |
| **事件日期从** | 事件发生在此之后 | 2024-01-01 |
| **事件日期到** | 事件发生在此之前 | 2024-03-31 |
| **记录日期从** | 写下在此之后 | 2024-01-01 |
| **记录日期到** | 写下在此之前 | 2024-12-31 |
**强大查询示例:**
> 事件时间:2024 年 3 月
> 记录时间:任何
返回:*"所有关于 2024 年 3 月事件的记忆,无论你什么时候记录的。"*
灵活的日期精度 [#灵活的日期精度]
Nowledge Mem 处理灵活的日期:
* **年**:"2024" -> 匹配 2024 年的任何内容
* **月**:"2024-03" -> 匹配 2024 年 3 月
* **日**:"2024-03-15" -> 匹配那个特定日期
系统保留你的原始精度并相应显示。
知识演化 [#知识演化]
双时态搜索与知识演化结合更加强大。后台智能自动检测你对某个话题的想法变化:
**周二**:你保存了"新服务用 PostgreSQL。"
**周四**:你提到 CockroachDB 作为迁移目标。
**周五**:后台智能用 EVOLVES 关系链接它们,标记出张力。
现在搜索"数据库决策",你不只是得到孤立的记忆。你得到**演化链**:原始决策、更新,以及它们之间的关系。你能准确看到你的思维何时、如何改变。
演化类型:
* **替换**:新信息使旧信息过时
* **丰富**:新信息为旧信息添加细节
* **确认**:来自不同来源的相同结论
* **挑战**:矛盾的信息,标记待审查
实际示例 [#实际示例]
董事会回顾 [#董事会回顾]
> **查询**:"2024 年第一季度的架构决定"
>
> **结果**:带有第一季度上下文的原始决策备忘录,加上展示决策如何变化的演化链
合规审计 [#合规审计]
> **查询**:"事故前的安全策略"
>
> **结果**:违规前存在什么策略,带有证明何时记录的时间戳
项目复盘 [#项目复盘]
> **查询**:"项目启动时的 project-x 假设"
>
> **结果**:后来被证明错误的原始假设,链接到证明它们错误的后续洞察
知识图谱 + 时间 [#知识图谱--时间]
图谱视图有一个**时间线滑块**,可以按日期范围过滤节点和边。
将范围设置为"2024 年 3 月"并查看:
* 只有当时存在的实体
* 只有当时已知的连接
* 你在那个时刻的知识状态
向前拖动滑块,观察你的理解如何演变。播放动画,看知识随时间累积。
记忆衰减如何工作 [#记忆衰减如何工作]
记忆衰减遵循以下规则:
* 默认**优先最近的记忆**(30 天半衰期)
* **提升经常访问的**记忆(对数缩放)
* **尊重重要性分数**(重要性底线防止完全衰减)
* **从行为中学习**(点击、停留时间)
普通搜索会浮现新鲜、相关的结果;时间搜索则绕过衰减,精确返回你指定的时段。
时间意图检测需要**深度模式**搜索。在快速模式下,时间引用仅按关键词匹配。对于"最近在做"或"上季度的决定"等查询,启用深度模式。
查看[搜索与相关性](/zh/docs/search-relevance)了解评分、衰减和时间匹配如何工作的完整技术分解。
两种时间 [#两种时间]
理解区别是关键:
| 问题 | 哪种时间? |
| -------------- | ----- |
| "我三月份做了什么决定?" | 事件时间 |
| "我上周写了什么?" | 记录时间 |
| "显示关于旧事件的最近笔记" | 两者 |
| "转型前我知道什么?" | 事件时间 |
大多数搜索使用**事件时间**,因为你在问事情何时发生。
**记录时间**对以下有用:
* 查找最近的捕获
* 审查你一直在记录什么
* 审计知识何时被记录
为什么这很重要 [#为什么这很重要]
传统搜索找内容。时间搜索找**上下文**。知识演化找**故事**。
> "我们用当时掌握的信息做了最好的决定。这就是证据。这里是我们的思维何时以及为何改变的完整记录。"
你的记忆带时间戳、有版本控制、历史可查。
下一步 [#下一步]
* [你的知识,你做主](/zh/docs/use-cases/shared-memory) -> 自由切换工具,不丢失上下文
* [看见你的专长](/zh/docs/use-cases/expertise-graph) -> 可视化你的知识
* [后台智能](/zh/docs/advanced-features) -> 知识图谱能力
# 看见你的专长 (/docs/zh/use-cases/expertise-graph)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
问题所在 [#问题所在]
你多年来积累了大量知识,但能看到它的全貌吗?
> 我知道我擅长...某些东西。技术方面。但如果有人让我描述我的专长,我会很难说清楚。全凭直觉。没有具体的东西。
知识分散在记忆、笔记和对话中,模式和连接都看不见。
解决方案 [#解决方案]
Nowledge Mem 将你的知识可视化为一个**活的图谱**。节点是你的记忆和实体。边是关系。图谱**自动构建**:后台智能在夜间自动从你的记忆中提取实体和关系。
运行**社区检测**,观察你的专长集群浮现:
工作原理 [#工作原理]
图谱自动构建 [#图谱自动构建]
你不需要手动标记或分类任何东西。后台智能读取你的记忆并提取:
* **实体**:技术、人员、概念、项目
* **关系**:它们之间如何连接
* **演化链**:你对某个话题的想法如何变化
这一切自动发生。通过任何渠道保存记忆(自动同步、浏览器扩展、Timeline、`/sum`),图谱就会自行生长。
自动实体提取需要 [Pro 许可证](/zh/docs/mem-pro)和已配置的远程 LLM。
运行社区检测 [#运行社区检测]
在右侧面板中,找到**图算法**并点击**聚类**下的计算。
Louvain 算法分析你的知识结构并找到自然集群:
| 社区 | 大小 | 主题 |
| ----- | ------ | ------- |
| 分布式系统 | 87 条记忆 | 后端架构、扩展 |
| 团队领导 | 45 条记忆 | 指导、沟通 |
| 性能 | 62 条记忆 | 优化、分析 |
| 个人项目 | 23 条记忆 | 创意实验 |
每个集群在其节点周围获得一个彩色"气泡"。
穿越时间 [#穿越时间]
图谱底部的**时间线滑块**允许你按日期范围过滤。
拖到"2024 年 1 月",查看你当时的知识状态。向前拖动,观察新集群形成、现有集群增长、连接增多。
播放动画,观看你的专长在数月间演变。看到新兴趣何时出现,何时与现有知识连接,何时成长为完整的集群。
探索和发现 [#探索和发现]
导航图谱:
* **点击**任何节点查看其详情
* **双击**扩展邻居
* **Shift+拖动**套索选择多个节点
* **按 C** 切换社区气泡
* **按 E** 扩展所选节点的邻居
发现你从未注意到的模式:
> 每条领导力记忆都链接回调试会话。我通过教调试来领导。
你将发现什么 [#你将发现什么]
专长集群 [#专长集群]
社区检测揭示你的知识自然分组的地方:
* **核心优势**:大型、密集的集群
* **新兴领域**:小但正在增长的集群
* **桥梁**:连接多个集群的节点(往往是你最独特的技能)
知识演化 [#知识演化]
后台智能追踪你的思维如何变化:
* **周二**:"新服务用 PostgreSQL"
* **周四**:"考虑用 CockroachDB 迁移"
* **周五简报**:"你的数据库选型在演变"
这些演化链在图谱中显示为链接的节点。你能准确看到你的观点在哪里发生了转变,并追踪整个过程。
隐藏模式 [#隐藏模式]
探索并发现:
* 你从未有意识追踪的重复主题
* 看似无关的项目之间的连接
* 你独特的视角和方法
* 相关主题之间的差距
向 AI 询问你的图谱 [#向-ai-询问你的图谱]
查看你的图谱,让 AI Now 解释它:
> 基于我的知识图谱,什么职业道路最适合我?
AI Now 综合:
> 你的记忆显示深度系统知识与教学能力的独特交叉。你最核心的概念(事件驱动架构、调试)连接技术和领导力集群。考虑:Staff Engineer、Developer Advocate 或具有技术重点的 Engineering Manager。
其他可尝试的问题:
* "我最强的专长领域是什么?"
* "我的知识差距在哪里?"
* "接下来我应该探索什么主题?"
* "我的重点是如何随时间变化的?"
复合效应 [#复合效应]
记忆越多,图谱越丰富,洞察越深。
**1 个月后:**
> 我可以看到我的主要主题,但集群很小
**6 个月后:**
> 清晰的专长领域。意外的连接正在浮现。后台智能在发现我漏掉的模式。
**1 年后:**
> 我可以实际看到我的思维是如何演变的。我去年建立的连接为今年奠定了基础。
**对于绩效评估:**
> 我在评估前探索了我的图谱。在每个维度都有成长的具体例子。
下一步 [#下一步]
* [后台智能](/zh/docs/advanced-features) -> 图谱如何自动生长
* [你的知识,你做主](/zh/docs/use-cases/shared-memory) -> 自由切换工具,不丢失上下文
* [穿越时间搜索](/zh/docs/use-cases/bi-temporal) -> 时间查询和演化链
# 概述 (/docs/zh/use-cases)
import { Cards, Card } from 'fumadocs-ui/components/card';
import { Brain, Clock, FileText, Network, MessageSquare, Sparkles } from 'lucide-react';
Nowledge Mem 从你与 AI 的一切互动中学习。它自动捕获对话,实时同步会话,构建一个夜间自动生长的知识图谱。每个连接的工具都从你的完整上下文开始。
} href="/zh/docs/use-cases/shared-memory" title="你的知识,你做主">
告诉 Claude 一次,Cursor 就已经知道了。一个知识库,跨越你使用的每个 AI 工具。
} href="/zh/docs/use-cases/session-backup" title="永不丢失会话">
会话实时自动同步。Claude Code、Cursor、Codex、ChatGPT -- 每段对话都被捕获。
} href="/zh/docs/use-cases/bi-temporal" title="穿越时间搜索">
董事会问为什么选了 React Native。找到你当时相信的,而不是你现在知道的。
} href="/zh/docs/use-cases/notes-everywhere" title="你的笔记,无处不在">
Obsidian、Notion、PDF、Word 文档。一次搜索覆盖所有知识源。
} href="/zh/docs/use-cases/expertise-graph" title="看见你的专长">
图谱从你的记忆中自动构建。社区检测揭示你不知道自己拥有的专长集群。
} href="/zh/docs/ai-now" title="AI Now">
运行在本地的个人 AI 智能体。深度研究、文件分析、演示文稿——精心打造的能力,建立在你的完整知识之上。
三个核心变化 [#三个核心变化]
**自动捕获。** 浏览器扩展从 ChatGPT、Claude、Gemini 等 13+ 个平台抓取洞察。Claude Code、Cursor、Codex 的会话实时同步。你不再需要在工具之间复制粘贴。
**睡觉时它在学习。** 后台智能检测你的思维演变,从零散的记忆综合参考文章,标记矛盾。每天早上的简报 `~/ai-now/memory.md` 在你开口之前就告诉 AI 工具你在做什么。
**它跟着你走。** 一条命令连接 20+ 个 AI 智能体。自由切换工具,知识不变。
工作原理 [#工作原理]
1. **捕获** -- 浏览器扩展、会话同步,或直接输入 Timeline
2. **连接** -- 系统将它关联到你已有的所有知识
3. **生长** -- 后台智能在夜间构建演化链、结晶和标记
4. **使用** -- 任何连接的工具在需要时自动找到
知识积累在 Mem 里,不依赖任何单个工具。
开始 [#开始]
选择上面的用例了解详情,或直接前往[快速入门](/zh/docs/getting-started)。
# 你的笔记,无处不在 (/docs/zh/use-cases/notes-everywhere)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
问题所在 [#问题所在]
你多年来一直在做笔记。Obsidian。Notion。也许两个都用。
数千条记录,仔细标记,广泛链接。然而,
> 我知道我写过这个。我只是找不到它。搜索没用。标签没用。
更糟的是,AI 助手根本不知道这些笔记存在,你每次都在重复解释笔记里已有的内容。
解决方案 [#解决方案]
不取代笔记应用,而是**将它接入你的知识**。
继续使用 Obsidian、Notion、Apple Notes 或 Markdown 文件夹,就像你现在做的那样。Nowledge Mem 连接到它们,让你的笔记与你的记忆一起可搜索,通过 AI Now,以及通过 MCP 的任何 AI 工具。
有了**资料库**,你还可以拖入 PDF、Word 文档和演示文稿。所有内容都从一个地方搜索。
工作原理 [#工作原理]
连接你的笔记 [#连接你的笔记]
**Obsidian:**
1. 在 Nowledge Mem 中打开 AI Now
2. 前往 **插件** -> 启用 **Obsidian Vault**
3. 设置你的知识库路径(例如,`/Users/you/Documents/ObsidianVault`)
4. 完成。AI Now 现在可以搜索你的知识库
**Notion:**
1. 打开 AI Now -> **插件** -> 启用 **Notion**
2. 点击 **连接 Notion**
3. 在浏览器弹出窗口中授权访问
4. 你的工作区现在可访问
将文档导入资料库 [#将文档导入资料库]
将文件直接拖入 Timeline 输入框或打开资料库视图:
| 格式 | 扩展名 | 处理方式 |
| ------------ | ----------- | ----------- |
| **PDF** | .pdf | 提取文本,分段,索引 |
| **Word** | .docx, .doc | 解析为文本,分段,索引 |
| **演示文稿** | .pptx | 提取幻灯片内容并索引 |
| **Markdown** | .md | 直接解析并索引 |
索引完成后,文档内容可与你的记忆和笔记一起搜索。
跨所有内容搜索 [#跨所有内容搜索]
向 AI Now 提问任何问题:
> 我的笔记关于量子计算说了什么?
AI Now:
1. 搜索你的 Obsidian 知识库
2. 搜索你的 Notion 工作区
3. 搜索你的 Nowledge 记忆
4. 搜索你的资料库文档
5. 组合并综合结果
一个问题,覆盖所有知识源。
提炼成记忆 [#提炼成记忆]
找到有价值的笔记?将它们转变为永久记忆:
> 从这些量子计算笔记中提炼关键洞察
AI Now 创建:
* **洞察**:"量子纠错需要 O(n^2) 量子比特"
* **决定**:"近期研究专注于 NISQ 算法"
* **事实**:"IBM 在 2023 年 12 月宣称量子优势"
这些记忆现在:
* 可通过语义理解搜索
* 在知识图谱中连接
* 可供你所有 AI 工具通过 MCP 访问
* 相关时会出现在你的工作记忆简报中
Obsidian 集成 [#obsidian-集成]
设置 [#设置]
1. 打开 Nowledge Mem
2. 点击 AI Now 标签
3. 在侧边栏中前往 **插件**
4. 找到 **Obsidian Vault** 并切换开启
5. 输入你的知识库路径,例如:`/Users/yourname/Documents/ObsidianVault`
你可以做什么 [#你可以做什么]
连接后:
* 按内容搜索笔记:*"找到我关于机器学习的笔记"*
* 阅读特定笔记:*"显示我关于项目启动的笔记"*
* 在上下文中引用:*"基于我关于 X 的 Obsidian 笔记,帮我..."*
你的知识库在本地读取。笔记永远不会上传到任何地方。Nowledge Mem 只是读取你机器上的文件。
Notion 集成 [#notion-集成]
设置 [#设置-1]
1. 打开 AI Now -> **插件**
2. 找到 **Notion** 并点击 **连接**
3. 在浏览器弹出窗口中授权
4. 选择你想连接的工作区
你可以做什么 [#你可以做什么-1]
* 搜索你的工作区:*"找到关于季度规划的页面"*
* 阅读页面内容:*"我的产品路线图页面里有什么?"*
* 交叉引用:*"比较我的 Notion 笔记与我关于 X 的记忆"*
* 结合公开信息和私人知识进行深度研究:*"量子计算的最新进展是什么?"*
Notion 使用安全的 OAuth。你完全控制 Nowledge Mem 可以访问哪些页面。随时从 Notion 设置中撤销。
内置集成 [#内置集成]
部分工具已内置 Nowledge Mem:
* **DeepChat**:在设置中开启 Nowledge Mem。你的记忆在每次对话中可用。
* **LobeHub**:从市场安装。完整 MCP 集成。
即将推出 [#即将推出]
* **Apple Notes** 集成
加入[社区](/zh/docs/community)请求集成。
下一步 [#下一步]
* [AI Now](/zh/docs/ai-now) -> 了解 AI Now 还能做什么
* [资料库](/zh/docs/library) -> 导入和搜索文档
* [看见你的专长](/zh/docs/use-cases/expertise-graph) -> 可视化你的知识图谱
* [集成](/zh/docs/integrations) -> 完整设置指南
# 永不丢失会话 (/docs/zh/use-cases/session-backup)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
问题所在 [#问题所在]
你刚刚进行了一次史诗般的调试会话。与 Claude Code 三个小时。你发现了一个竞态条件,追踪了15个文件,构建了一个带测试的完美修复。
但 AI 对话是短暂的。上下文被压缩,token 限制到达,会话过期。200 条消息的对话线程中,早期内容已经消失了。
> "我以前解决过这个完全相同的问题。我只是不记得怎么解决的了。或者在哪里。或者什么时候。"
解决方案 [#解决方案]
你的会话自动同步。Claude Code、Cursor、Codex 和 OpenCode 的对话实时捕获。ChatGPT、Claude、Gemini 的浏览器对话由扩展抓取。不需要记命令。不需要手动导出。
准备好之后,将对话线程提炼成永久、可搜索、连接图谱的记忆。
工作原理 [#工作原理]
会话自动同步 [#会话自动同步]
**Claude Code 和 Codex (npx skills):**
安装一次:
```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```
会话自动保存。智能体在会话结束时提炼关键洞察。
**Cursor 和 OpenCode(自动同步):**
Nowledge Mem 实时监控新对话。打开**对话线程**查看它们在你工作时出现。不需要导入步骤。
**浏览器(ChatGPT、Gemini、Claude Web):**
Exchange v2 扩展从 13+ 个 AI 聊天平台捕获对话。洞察在你聊天时流入 Mem。
**手动保存(任何 MCP 工具):**
```
/save -> 保存完整对话线程
/sum -> 将洞察提炼成记忆
```
提炼成永久知识 [#提炼成永久知识]
打开保存的对话线程并点击**提炼**。AI 阅读整个对话并提取:
* **决定**:"选择滑动窗口而不是令牌桶因为..."
* **洞察**:"异步回调中的竞态条件需要互斥锁"
* **模式**:"测试基于时间的 bug 需要模拟时钟"
* **事实**:"Redis SETNX 提供原子锁获取"
每个都成为独立的、可搜索的记忆,带有适当的标签。
后台智能自动连接 [#后台智能自动连接]
你的新记忆不会孤立存在。后台智能会:
* 将它们链接到同一代码库的以前工作
* 检测它们是否更新或矛盾了早期决策
* 将它们连接到知识图谱中的相关实体
* 在第二天早上的工作记忆简报中浮现
三个月后,同事遇到同样的 bug。你的简报在他们开口之前就提到了它。
随时搜索 [#随时搜索]
三个月后,类似的 bug 出现:
> 搜索:"支付竞态条件"
Nowledge Mem 返回完整上下文:问题、调试步骤、解决方案、测试方法。
不再重新解决已解决的问题。
捕获来源 [#捕获来源]
| 来源 | 方式 | 捕获内容 |
| --------------- | ----------------------- | ------------- |
| **Claude Code** | npx skills(自动)或 `/save` | 完整会话含代码上下文 |
| **Codex** | npx skills(自动)或 `/save` | 完整会话含代码上下文 |
| **Cursor** | 自动同步(实时监控) | 对话实时捕获 |
| **OpenCode** | 自动同步(实时监控) | 对话实时捕获 |
| **ChatGPT** | 浏览器扩展(自动捕获) | 网页聊天中的洞察 |
| **Claude Web** | 浏览器扩展(自动捕获) | 网页聊天中的洞察 |
| **Gemini** | 浏览器扩展(自动捕获) | 网页聊天中的洞察 |
| **13+ 更多** | 浏览器扩展 | 任何支持的 AI 聊天平台 |
提取的内容 [#提取的内容]
当你提炼对话线程时,AI 按类型创建记忆:
| 类型 | 示例 | 标签 |
| ------ | ------------------ | -------- |
| **决定** | "使用 Redis 进行分布式锁" | 决定、架构 |
| **洞察** | "异步回调需要仔细排序" | 洞察、调试 |
| **过程** | "重现竞态条件的步骤" | 过程、测试 |
| **事实** | "SETNX 如果键被设置返回 1" | 事实、redis |
| **经验** | "支付服务的调试会话" | 经验、项目 |
复合效应 [#复合效应]
一个对话线程有用,十个是知识库,一百个就是你的机构记忆。
> "今天初级开发者遇到了同样的 bug。发给他们我的记忆。他们20分钟修复了,而不是3小时。"
调试会话不只是对话,而是给未来自己的可复用知识。
专业提示 [#专业提示]
你不需要提炼每个对话线程。保存重要的会话:突破、架构决定、来之不易的解决方案。
对于敏感代码库,审查你正在保存的内容。对话线程可能包含专有代码或凭据。
下一步 [#下一步]
* [你的知识,你做主](/zh/docs/use-cases/shared-memory) -> 自由切换工具,不丢失上下文
* [穿越时间搜索](/zh/docs/use-cases/bi-temporal) -> 从特定时间段找到记忆
* [集成](/zh/docs/integrations) -> 每个工具的设置指南
# 你的知识,你做主 (/docs/zh/use-cases/shared-memory)
import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';
问题所在 [#问题所在]
上周你告诉 Claude Code 项目架构。今天,又要向 Cursor 解释一遍。明天,想试试大家都在说的新工具,但又得从零开始。
这不是记忆问题,是绑定问题。你的知识被锁在上一个用的工具里。
> "我已经解释过了。为什么换个工具就得重来?"
解决方案 [#解决方案]
Nowledge Mem 是一个知识层,位于你和所有 AI 工具之间。它自动捕获你的洞察,实时同步会话,并撰写每日简报,让每个工具都从你的完整上下文开始。
一条命令连接。零工作流变化。
工作原理 [#工作原理]
一条命令连接 [#一条命令连接]
```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```
适用于 Claude Code、Cursor、Codex、OpenCode、OpenClaw、Alma 等 20+ 个智能体。安装四项技能:工作记忆简报、知识搜索、会话保存和洞察捕获。
安装后,智能体在会话开始时读取你的早间简报,在工作中搜索你的知识,并保存它学到的东西。
捕获自动发生 [#捕获自动发生]
你不需要记着去保存。Mem 从多个渠道捕获:
**浏览器扩展 (Exchange v2):**
扩展监控你在 ChatGPT、Claude、Gemini 等 13+ 个平台上的 AI 对话。洞察在你工作时自动捕获。
**会话自动同步:**
Claude Code、Cursor、Codex 和 OpenCode 的会话实时同步。一个 3 小时的调试会话无需你输入任何命令就被保存。
**Timeline 输入:**
输入一个想法,粘贴一个 URL,拖入一个文件。用于你想保存特定内容的时候。
**手动命令:**
```
/sum -> 将此对话总结成记忆
/save -> 保存整个对话线程
```
每个工具都知情启动 [#每个工具都知情启动]
每天早上,后台智能将简报写入 `~/ai-now/memory.md`。每个连接的 AI 工具在会话开始时读取它。
你的智能体已经知道:
* 你正在做什么
* 你最近做了什么决策
* 开放的问题和矛盾
* 你的思维如何演变
不需要重新解释。早上 9 点打开 Claude Code,它从你上次离开的地方继续。
自由切换工具 [#自由切换工具]
新工具?连接到 Mem,立刻拥有你的全部上下文。
**示例:**
你保存了:*"架构决定:使用 Redis 进行会话管理因为..."*
后来,在 Cursor 中:*"帮我添加会话处理"*
Cursor 搜索你的知识,找到 Redis 决定,应用相同模式。无需重新解释。
实际示例 [#实际示例]
**没有 Nowledge Mem:**
> 你:"帮我实现限流"
>
> Claude:"什么类型?令牌桶?滑动窗口?你的用例是什么?"
>
> 你:*\[这个月第5次解释]*
**有 Nowledge Mem:**
> 你:"帮我实现限流"
>
> Claude:*\[读取工作记忆简报,搜索你的记忆]* "根据你上个月对支付服务使用滑动窗口限流的决定,这是一个匹配你 Redis 模式的实现..."
连接方式 [#连接方式]
| 渠道 | 如何工作 | 捕获什么 |
| ------------------ | ------------ | ------------------------------------ |
| **npx skills** | 一条命令,20+ 智能体 | 工作记忆、搜索、保存、提炼 |
| **浏览器扩展** | 自动捕获 AI 对话 | 来自 ChatGPT、Claude、Gemini 等 13+ 平台的洞察 |
| **会话自动同步** | 实时监控 | Claude Code、Cursor、Codex、OpenCode 会话 |
| **MCP** | 直接协议连接 | 任何兼容 MCP 的工具 |
| **Claude Desktop** | 一键扩展 | 完整集成 |
| **内置支持** | 在设置中切换 | DeepChat、LobeHub |
复合效应 [#复合效应]
用几周后,新连接的工具立刻知道你的工作方式。偏好跨工具保持。决策持续累积。你保存过的每条洞察都可被你将来使用的每个工具找到。
价值积累在 Mem 里,不在任何单个工具上。
下一步 [#下一步]
* [永不丢失会话](/zh/docs/use-cases/session-backup) -> 自动同步和备份 AI 对话
* [穿越时间搜索](/zh/docs/use-cases/bi-temporal) -> 找到你当时知道的
* [集成](/zh/docs/integrations) -> 连接所有工具
# Get Evolves Edges (/docs/api/agent/evolves/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get EVOLVES relationships. Edge direction: older → newer.
When memory\_id is provided, returns only edges where that memory participates
(as either the older or newer node). Use this to get the full version chain
for a specific memory.
# Get Agent Status (/docs/api/agent/status/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get the Knowledge Agent's current status.
# Get Working Memory (/docs/api/agent/working-memory/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Read the Working Memory file (\~/ai-now/memory.md).
Returns today's WM by default, or an archived day's WM if date is provided.
This is the single source of truth for WM content — feed events are snapshots.
# Update Working Memory (/docs/api/agent/working-memory/put)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Write the Working Memory file from user edits.
Validates structure, writes to \~/ai-now/memory.md, and emits a feed event
with edited\_by="user" to distinguish from agent-generated updates.
# Get Community Details (/docs/api/communities/community_id/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get community details including entities and sample memories.
# Get Favorite Memories (/docs/api/favorites/memories/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get all favorite memories.
# Get Favorite Threads (/docs/api/favorites/threads/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get all favorite threads.
# Get Graph Analysis (/docs/api/graph/analysis/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get comprehensive graph analysis including community and centrality metrics.
This endpoint provides a complete overview of the graph structure, communities,
and centrality measures without triggering new calculations.
# Graph Analysis Health (/docs/api/graph/health/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Health check for graph analysis service.
Returns the status of algo extension and graph analysis capabilities.
# Cleanup Orphaned Entities (/docs/api/graph/orphans/delete)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Clean up all orphaned entities from the graph.
This safely removes Entity nodes that have no relationships:
* No MENTIONS from any Memory
* No RELATES\_TO connections to other entities
* No HAS\_LABEL relationships
This operation only affects Entity nodes and will not delete:
* Internal system nodes (GraphMeta, migrations, etc.)
* Label nodes
* Community nodes
* Memory nodes
# Find Orphaned Entities (/docs/api/graph/orphans/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Find all orphaned entities in the graph.
Orphaned entities are Entity nodes that have no relationships:
* No MENTIONS from any Memory
* No RELATES\_TO connections to other entities
* No HAS\_LABEL relationships
* No BELONGS\_TO community relationships
This only checks Entity nodes, not internal system nodes like schema versions.
# Get Graph Data (/docs/api/graph/sample/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get graph data for visualization.
# Search Graph (/docs/api/graph/search/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Enhanced graph search that finds relevant content and builds visualization data.
# Delete Label (/docs/api/labels/label_id/delete)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Delete a label and all its relationships.
# Get Label (/docs/api/labels/label_id/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get a specific label by ID.
# Update Label (/docs/api/labels/label_id/put)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Update an existing label.
# Distill Memories From Thread (/docs/api/memories/distill/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Create memories from thread content after distillation.
This endpoint actually creates memories in the database based on the distillation type.
For knowledge graph mode, it includes entity and relationship metadata.
# Delete Memory (/docs/api/memories/memory_id/delete)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Delete a memory and optionally its relationships.
# Get Memory (/docs/api/memories/memory_id/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get a specific memory by ID with associated labels.
# Update Memory (/docs/api/memories/memory_id/patch)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Update memory properties like importance, title, and content.
# Reindex Memories Bulk (/docs/api/memories/reindex/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Reindex multiple memories or all memories needing reindex.
This endpoint can work for both single and bulk reindexing:
* If memory\_ids is provided: reindex those specific memories
* If memory\_ids is None/empty: reindex all memories with reindex\_needed=True
# Search Memories (/docs/api/memories/search/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Memory search with filtering, metadata, and reasoning support.
# Reindex Search Index (/docs/api/search-index/reindex/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Rebuild the search index from Kuzu database.
This performs a full reindex of:
* All memories (with search embeddings)
* All thread messages
* All communities
* All entities
The embedding model is platform-specific:
* macOS Apple Silicon: Qwen3-Embedding via mlx-embeddings
* Windows/Linux: BGE-M3 via FastEmbed/ONNX
This is a heavy operation and should only be triggered:
* After first downloading the search embedding model
* After a data migration
* When explicitly requested by the user
# Get Search Index Status (/docs/api/search-index/status/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get status of the search index (LanceDB + hybrid search).
The embedding model is platform-specific:
* macOS Apple Silicon: Qwen3-Embedding via mlx-embeddings
* Windows/Linux: BGE-M3 via FastEmbed/ONNX
This endpoint checks:
* Whether the search embedding model is cached locally
* Whether the search index service is initialized
# Search Sources (/docs/api/sources/search/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Full-text search across source names and content.
# Delete Source (/docs/api/sources/source_id/delete)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Delete a source and its search index records.
# Get Source Detail (/docs/api/sources/source_id/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get source detail with related memories and revision chain.
# Update Source (/docs/api/sources/source_id/patch)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Update source lifecycle state.
Supported actions:
* 'reparse': Re-run parse → chunk → index pipeline
* 'mark\_stale': Mark source as stale (needs re-processing)
# Bulk Delete Threads (/docs/api/threads/bulk/delete)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Delete multiple threads and optionally their extracted memories.
# Import Bulk Threads (/docs/api/threads/import-bulk/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Import selected threads from a bulk export.
This endpoint starts a background import job and returns immediately.
Use the returned job\_id to poll for progress (see the sketch below).
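A polling sketch. The status route mirrors the Get Import Status entry later in this reference; the thread-selection fields in the start request and the exact status values are assumptions.

```ts
const BASE = "http://localhost:8000"; // assumed local API base URL

// Start the import; selection fields in the body follow the generated schema.
const start = await fetch(`${BASE}/threads/import-bulk`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ thread_ids: ["t1", "t2"] }), // assumed field name, placeholder ids
});
const { job_id } = await start.json();

// Poll until the job leaves its in-progress states (status values assumed).
let job;
do {
  await new Promise((resolve) => setTimeout(resolve, 2000)); // wait 2s between polls
  job = await fetch(`${BASE}/threads/import-bulk/${job_id}/status`).then((r) => r.json());
} while (job.status === "pending" || job.status === "running");
console.log("import finished:", job);
```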
# Get Import Config (/docs/api/threads/import-config/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get the current import configuration.
# Update Import Config (/docs/api/threads/import-config/put)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Update import configuration.
# Parse Thread Content (/docs/api/threads/parse/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Parse thread content from various formats.
# Parse Bulk Export (/docs/api/threads/parse-bulk/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Parse all threads from a bulk export file.
This endpoint parses the export file and returns summaries of all
threads found. The full thread content is not returned here to
keep the response size manageable.
# Search Threads Full (/docs/api/threads/search/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Full thread search with message matching.
# Get Thread Summaries (/docs/api/threads/summaries/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get all thread titles/summaries.
# Delete Thread (/docs/api/threads/thread_id/delete)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Delete a thread and optionally its extracted memories.
# Get Thread (/docs/api/threads/thread_id/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get a complete thread with messages.
# Get Feed Events (/docs/api/agent/feed/events/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get feed events from time-partitioned JSONL files.
Supports two filtering modes:
* last\_n\_days: N days back from today (default)
* date\_from + date\_to: explicit date range (YYYY-MM-DD)
Both modes can be combined with event\_type, severity, and unresolved\_only (see the example below).
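For example, fetching a month of unresolved events of one type; the query parameter names come from the description above, while the base URL and the example event\_type value are assumptions:

```ts
// Explicit date-range mode; use last_n_days instead for the relative mode.
const params = new URLSearchParams({
  date_from: "2025-01-01",
  date_to: "2025-01-31",
  event_type: "insight",   // example value; actual event types follow the schema
  unresolved_only: "true",
});
const events = await fetch(
  `http://localhost:8000/agent/feed/events?${params}`,
).then((r) => r.json());
console.log(events);
```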
# Get Knowledge Processing Status (/docs/api/agent/knowledge-processing/status/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get knowledge processing settings and status.
# Trigger Community Detection (/docs/api/agent/trigger/community-detection/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Manually trigger community detection on the knowledge graph.
# Trigger Crystallization (/docs/api/agent/trigger/crystallization/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Manually trigger a crystallization review.
# Trigger Daily Briefing (/docs/api/agent/trigger/daily-briefing/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Manually trigger a daily briefing.
# Trigger Insight Detection (/docs/api/agent/trigger/insight-detection/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Manually trigger proactive insight detection.
# Trigger KG Extraction (/docs/api/agent/trigger/kg-extraction/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Manually trigger KG extraction (backfill, targeted, or scoped to specific memories).
# Get Working Memory History (/docs/api/agent/working-memory/history/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
List dates that have archived Working Memory files.
Scans \~/ai-now/memory-archive/ for YYYY/MM/YYYY-MM-DD.md files.
Returns newest-first.
# Get Entity Relationships (/docs/api/entities/entity_id/relationships/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get relationships for a specific entity.
Returns all connected entities and memories via RELATES\_TO and MENTIONS relationships.
# List Augmentation Jobs (/docs/api/graph/augmentation/jobs/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
List recent augmentation jobs.
Optionally filter by status (pending, running, completed, failed).
# Start Augmentation Job (/docs/api/graph/augmentation/start/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Start a background augmentation job.
Supports the following job types (see the example below):
* 'community\_detection': Apply Louvain community detection
* 'pagerank\_calculation': Apply PageRank importance calculation
* 'undo\_community\_detection': Remove community detection augmentation
* 'undo\_pagerank\_calculation': Remove PageRank augmentation
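As a sketch: the job\_type field name is an assumption, the value is one of the types listed above, and progress can be read from the Get Job Status entry later in this reference:

```ts
// Queue Louvain community detection as a background augmentation job.
const res = await fetch("http://localhost:8000/graph/augmentation/start", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ job_type: "community_detection" }), // field name assumed
});
const { job_id } = await res.json();

// Progress is then available via /graph/augmentation/status/{job_id}.
console.log("queued augmentation job", job_id);
```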
# Get Augmentation State (/docs/api/graph/augmentation/state/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get the current graph augmentation state.
Returns information about which augmentations are currently applied,
their parameters, and the last augmentation timestamp.
# Expand Neighbors (/docs/api/graph/expand/node_id/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Expand the neighbors of a specific node, returning connected nodes and edges via depth-based traversal.
# Preview Distillation (/docs/api/memories/distill/preview/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Preview distillation results without creating memories in the database.
This endpoint processes content and returns distilled data for review before the user
decides whether to save the memories. It returns a cache\_key that can be used to reuse
these results in the actual distillation call.
Supports two modes (see the example below):
1. Simple LLM summarization: extract key memories only
2. Knowledge graph extraction: extract entities, relationships, and memories
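A preview sketch; the content and mode field names are assumptions, while the cache\_key behaviour is the one described above:

```ts
const content = "raw meeting notes or a long chat transcript to distill"; // placeholder input

// Preview distillation without writing anything to the database.
const preview = await fetch("http://localhost:8000/memories/distill/preview", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    content,
    mode: "knowledge_graph", // assumed value; the other mode is simple summarization
  }),
}).then((r) => r.json());

console.log(preview.memories);      // distilled candidates for review (field name assumed)
const cacheKey = preview.cache_key; // pass this to the actual distillation call to reuse results
```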
# Export Memory (/docs/api/memories/memory_id/export/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Export a memory in various formats.
# Toggle Memory Favorite (/docs/api/memories/memory_id/favorite/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Toggle memory favorite status.
# Get Memory Labels (/docs/api/memories/memory_id/labels/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get labels assigned to a memory.
# Get Reindex Status (/docs/api/memories/reindex/status/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get status of memories needing reindex.
# Install BGE-M3 (/docs/api/models/bge-m3/install/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Download and install the search embedding model for hybrid search.
The model downloaded is platform-specific:
* macOS Apple Silicon: Qwen3-Embedding (\~400MB, 4-bit quantized)
* Windows/Linux: BGE-M3 (\~542MB, INT8 quantized)
This also deletes the old E5 embedding model to save space.
After installation, run /search-index/reindex to build the index.
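The install-then-reindex flow described above, sketched against an assumed local base URL; if the install runs as a background job, wait for the status endpoint to report the model as cached before reindexing:

```ts
const BASE = "http://localhost:8000"; // assumed local API base URL

// 1. Download the platform-specific search embedding model.
await fetch(`${BASE}/models/bge-m3/install`, { method: "POST" });

// 2. Confirm the model is cached and the index service is initialized.
const status = await fetch(`${BASE}/models/bge-m3/status`).then((r) => r.json());
console.log(status);

// 3. Rebuild the search index (heavy operation; run once after installation).
await fetch(`${BASE}/search-index/reindex`, { method: "POST" });
```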
# Get BGE-M3 Status (/docs/api/models/bge-m3/status/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Check the status of the search embedding model for hybrid search.
The model is platform-specific:
* macOS Apple Silicon: Qwen3-Embedding (1024-dim, \~400MB)
* Windows/Linux: BGE-M3 (1024-dim, \~542MB)
Both provide high-quality multilingual embeddings for LanceDB hybrid search.
# Ingest File (/docs/api/sources/ingest/file/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Ingest a file through the full source pipeline.
Accepts a multipart file upload. The file is saved to a temp location,
then processed through ingest → parse → chunk → index.
# Ingest File Path (/docs/api/sources/ingest/file-path/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Ingest a file by local filesystem path (desktop app bridge).
Unlike the multipart upload endpoint, this accepts a path to a file
already on disk. Used by the Tauri desktop app.
# Ingest URL (/docs/api/sources/ingest/url/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Fetch a URL and ingest through the source pipeline.
Uses browse-now for authenticated content, falling back to httpx.
# Get Source Content (/docs/api/sources/source_id/content/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Read the parsed content of a source for preview.
Returns the markdown content produced by markitdown during parsing.
Works uniformly for files, URLs, and notes — all store parsed .md on disk.
# Trigger Source Extraction (/docs/api/sources/source_id/extract/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Trigger knowledge extraction from a source (Learn lifecycle).
Triggered when the user clicks the 'Learn' button on a source in the Library.
Queues a source\_extraction task for the Knowledge Agent and returns a 202-style
response immediately (the task runs in the background).
# Get Source Raw (/docs/api/sources/source_id/raw/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Serve the raw source file for native preview (PDF, DOCX, etc).
# Refetch Source (/docs/api/sources/source_id/refetch/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Re-fetch a URL source's content using the browser and re-parse.
Useful when the initial fetch captured an SPA shell or stale content.
Only works for URL-type sources.
# Discover Conversations (/docs/api/threads/conversations/discover/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Discover conversation files from AI coding assistants.
Scans file system for conversation files from Claude Code, Codex, Cursor, and OpenCode.
# Export Conversation Raw (/docs/api/threads/conversations/export-raw/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Export a raw conversation file as markdown or JSON without importing.
Parses the session file using the same parsers as import, but returns
formatted content directly instead of creating a thread.
# Import Conversation (/docs/api/threads/conversations/import/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Import a conversation file into Nowledge Mem.
Converts external conversation formats (Claude Code, Codex, Cursor, OpenCode) into threads.
# Hide Project (/docs/api/threads/import-config/hide-project/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Hide a project from the browse view.
# Hide Session (/docs/api/threads/import-config/hide-session/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Hide a session from the browse view.
# Unhide Project (/docs/api/threads/import-config/unhide-project/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Unhide a project.
# Unhide Session (/docs/api/threads/import-config/unhide-session/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Unhide a session.
# Save Session (/docs/api/threads/sessions/save/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Save coding session(s) as conversation thread(s).
Auto-detects sessions from project\_path. Creates a new thread or appends to an existing
one (with deduplication). Supports Claude Code and Codex.
The request carries the client, project\_path, and save options; the response is a
SessionSaveResponse with results for each processed session (see the example below).
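For example, a sketch where the client and project\_path fields come from the description above and the exact client identifier and base URL are assumptions:

```ts
// Save the current coding session(s) for a project as conversation thread(s).
const result = await fetch("http://localhost:8000/threads/sessions/save", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    client: "claude-code",                      // assumed identifier; Codex is also supported
    project_path: "/Users/me/projects/my-app",  // placeholder path
  }),
}).then((r) => r.json());

console.log(result); // SessionSaveResponse with results for each processed session
```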
# Append Messages To Thread (/docs/api/threads/thread_id/append/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Append messages to existing thread (for MCP integration).
Supports two modes:
1. Direct messages: `{"messages": [...]}`
2. File-based: `{"file_path": "...", "format": "auto"}`
Optional controls (see the sketch after this list):
* `deduplicate` (default: true)
* `idempotency_key` (string; used to derive stable external\_ids)
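Both modes side by side, as a sketch; the body shapes and optional controls are the ones listed above, while the base URL and the message object shape are assumptions:

```ts
const threadId = "thr_123"; // placeholder thread id
const url = `http://localhost:8000/threads/${threadId}/append`;

// Mode 1: direct messages, deduplicated, with a stable idempotency key.
await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "How did we fix the race condition?" }], // shape assumed
    deduplicate: true,
    idempotency_key: "session-2025-01-15", // used to derive stable external_ids
  }),
});

// Mode 2: point at a conversation file on disk and let the server parse it.
await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ file_path: "/tmp/session.jsonl", format: "auto" }),
});
```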
# Get Thread Coverage (/docs/api/threads/thread_id/coverage/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Read-only coverage report for debugging progress issues.
# Export Thread (/docs/api/threads/thread_id/export/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Export a thread in various formats.
# Toggle Thread Favorite (/docs/api/threads/thread_id/favorite/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Toggle favorite status for a thread.
# Start Watcher (/docs/api/threads/watcher/start/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Start the session watcher for auto-importing sessions.
# Get Watcher Status (/docs/api/threads/watcher/status/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get the current status of the session watcher.
# Stop Watcher (/docs/api/threads/watcher/stop/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Stop the session watcher.
# Delete Feed Event (/docs/api/agent/feed/events/event_id/delete)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Soft-delete a feed event by marking deleted=True in the JSONL file.
# Persist Question (/docs/api/agent/feed/input/persist-question/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Persist a question + agent response as a feed event (JSONL).
Called by the frontend after agent streaming completes for questions.
Does NOT create a memory — only writes the event for timeline persistence.
# Submit Feed Input Stream (/docs/api/agent/feed/input/stream/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Stream agent processing of feed input via Wire Protocol.
This is the agent-first approach: the agent classifies input,
searches the knowledge base, and provides streaming responses.
Returns Server-Sent Events (SSE) with Wire Protocol messages (a consumer sketch follows this list):
* turn\_begin: Agent turn started
* step\_begin: New processing step
* text: Text content from agent
* thinking: Agent's reasoning (if enabled)
* tool\_call: Agent called a tool
* tool\_result: Tool returned a result
* turn\_end: Agent turn completed
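A minimal consumer sketch that reads the SSE stream and logs each Wire Protocol message. The input field name and the exact SSE framing are assumptions; the message types are the ones listed above:

```ts
// Stream the agent's handling of one piece of feed input.
const res = await fetch("http://localhost:8000/agent/feed/input/stream", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ input: "We decided to move billing to CockroachDB" }), // field name assumed
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  // SSE events are separated by blank lines; each data: line carries one message.
  const chunks = buffer.split("\n\n");
  buffer = chunks.pop() ?? "";
  for (const chunk of chunks) {
    const data = chunk
      .split("\n")
      .find((line) => line.startsWith("data:"))
      ?.slice(5)
      .trim();
    if (!data) continue;
    const msg = JSON.parse(data); // e.g. { type: "text", ... } or { type: "tool_call", ... }
    console.log(msg.type, msg);
  }
}
```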
# Get Job Status (/docs/api/graph/augmentation/status/job_id/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get the status of a specific augmentation job.
Returns job progress, status, and any error messages.
# Apply Memory Kg Extraction (/docs/api/memories/memory_id/extract-kg/apply/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Apply knowledge graph extraction results to a memory.
This endpoint saves the extracted entities and relationships to the graph database
and updates the memory's metadata to track the extraction.
# Preview Memory Kg Extraction (/docs/api/memories/memory_id/extract-kg/preview/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Preview knowledge graph extraction for a memory.
This endpoint extracts entities and relationships from a memory's content
using the local LLM, providing a preview without saving to the database.
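A sketch of the preview → apply flow for one memory; the routes come from these two entries, while the empty preview body and the assumption that apply accepts the previewed payload are not confirmed here:

```ts
const BASE = "http://localhost:8000"; // assumed local API base URL
const memoryId = "mem_123";           // placeholder memory id

// 1. Preview: extract entities and relationships with the local LLM; nothing is saved yet.
const preview = await fetch(`${BASE}/memories/${memoryId}/extract-kg/preview`, {
  method: "POST",
}).then((r) => r.json());
console.log(preview); // extracted entities and relationships for review

// 2. Apply: persist the extraction to the graph and update the memory's metadata.
await fetch(`${BASE}/memories/${memoryId}/extract-kg/apply`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(preview), // assumption: apply accepts the previewed extraction
});
```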
# Remove Label From Memory (/docs/api/memories/memory_id/labels/label_id/delete)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Remove a label from a memory.
# Assign Label To Memory (/docs/api/memories/memory_id/labels/label_id/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Assign a label to a memory.
# Get Source Image (/docs/api/sources/source_id/images/filename/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Serve an extracted image from a source's images/ directory.
# Get Import Status (/docs/api/threads/import-bulk/job_id/status/get)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get the status of a bulk import job.
# Resolve Event (/docs/api/agent/feed/events/event_id/resolve/post)
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Resolve an action-required feed event and optionally execute graph mutations.
Resolution marks the event as resolved in the JSONL file.
An optional action executes a graph mutation (see the example below):
* delete\_memory: Delete all specified memories
* keep\_newer: Delete the first (older) memory, keep the rest
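For example, resolving a duplicate-memory event and keeping only the newer copy; the action values come from the list above, while the base URL and the action and memory\_ids field names are assumptions:

```ts
const eventId = "evt_123"; // placeholder feed event id

// Mark the event resolved and drop the older duplicate memory.
await fetch(`http://localhost:8000/agent/feed/events/${eventId}/resolve`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    action: "keep_newer",               // or "delete_memory" (field name assumed)
    memory_ids: ["mem_old", "mem_new"], // the first (older) memory is deleted, the rest kept
  }),
});
```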