# Background Intelligence (/docs/advanced-features) import VideoPlayer from "@/components/ui/video-player" import { Step, Steps } from 'fumadocs-ui/components/steps'; You save a decision about PostgreSQL in January. In July, you record that you're migrating to CockroachDB. Six months apart, different contexts. Nowledge Mem links them, tracks the evolution, and the next time you search for either, both appear with the full trail of how your thinking changed. This runs in the background. You open the app and the connections are there.
Knowledge Graph Background Intelligence requires a Pro license and a configured Remote LLM. Enable it in **Settings > Knowledge Processing**. Knowledge Graph [#knowledge-graph] Every memory becomes a node in a graph. The system extracts entities (people, technologies, concepts, projects) and maps how they relate to each other and to your existing knowledge. The result: search "distributed systems" and find your memory about "Node.js microservices." The words don't match. The meaning does. With Background Intelligence enabled, extraction runs automatically for new memories. You can also trigger it manually for older ones. What Gets Extracted [#what-gets-extracted] When a memory is processed, the LLM identifies: * **Entities**: people, technologies, concepts, organizations, projects * **Relationships**: how those entities connect * **Links to existing knowledge**: connections to memories already in the graph Trigger extraction for any memory by clicking **Knowledge Graph** on its card. Distill with Knowledge Graph Knowledge Evolution [#knowledge-evolution] When you save something new about a topic you've written about before, the system detects the relationship and creates a version link: | Link type | Meaning | Example | | -------------- | ---------------------- | ------------------------------------------------------------------- | | **Replaces** | You changed your mind | "Use CockroachDB" replaces "Use PostgreSQL" | | **Enriches** | You added depth | "React 19 adds a compiler" enriches "React 18 concurrent rendering" | | **Confirms** | Independent agreement | Two separate reviews recommend the same library | | **Challenges** | Contradiction detected | Your March assessment disagrees with your October conclusion | You can trace how your understanding of any topic changed over time. Community Detection [#community-detection] Graph algorithms find natural clusters in your knowledge: groups of tightly connected memories that form coherent topics. Your graph might reveal clusters for "React Patterns," "API Design," and "Database Optimization." A map of your expertise you never drew by hand. In **Graph View**, click **Compute** to run community detection. Graph Algorithm Compute Visual Exploration [#visual-exploration] Your knowledge as an interactive network. Click a memory to see its connections. Zoom into clusters. Follow links between topics you never thought to compare.
The timeline slider filters by date range. Watch how your knowledge in a domain grew over weeks or months. What the System Discovers [#what-the-system-discovers] The graph is the foundation. On top of it, Background Intelligence actively analyzes your knowledge and surfaces findings in the Timeline. Insights [#insights] Insights are connections you wouldn't have found on your own. * **Cross-domain links.** In March you noted that JWT refresh tokens were causing race conditions in the payment service. In September you chose the same token rotation pattern for a new auth service. The system catches it: same failure pattern, different project. * **Temporal patterns.** "You've revisited this database migration decision 3 times in 2 months." Maybe it's time to commit. * **Forgotten context.** "Your March assessment contradicts the approach you chose in October." The system remembers what you wrote, even when you don't. Every insight cites its sources so you can trace the reasoning. One insight that changes how you think beats ten that state the obvious. Strict quality gates keep the noise out. Crystals [#crystals] Five memories about React patterns saved over three months. Scattered across your timeline. Hard to piece together. A crystal synthesizes them into one reference article. Sources are cited. When you save new related information, the crystal updates. Crystals appear when the system has enough material to say something useful. You don't request them. Flags [#flags] Sometimes the system finds problems, not connections: | Flag | What it means | Example | | ---------------------- | -------------------------------- | ----------------------------------------------------------------- | | **Contradiction** | Two memories disagree | "Use JWT tokens" vs. "Session cookies are more secure" | | **Stale** | Newer knowledge supersedes older | A deployment guide from 6 months ago, overwritten by recent notes | | **Needs verification** | Strong claim, no corroboration | A single memory making an assertion with no supporting evidence | Each flag appears in the Timeline. You can dismiss it, acknowledge it, or link it to a resolution. Working Memory [#working-memory] Each morning, a briefing lands at `~/ai-now/memory.md`: * **Active topics** based on recent activity * **Unresolved flags** needing attention * **Recent changes** in your knowledge base * **Priority items** by frequency and recency Any AI tool connected via MCP reads this file at session start. Your coding assistant already knows what you're working on before you say anything. You can edit the file directly. Your changes are respected. Your Working Memory at `~/ai-now/memory.md` is readable by any connected AI tool via MCP. Coding assistants, writing tools, and other agents check it before starting a task. 
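The briefing is also scriptable. A minimal sketch using the `nmem wm` commands from the [CLI reference](/docs/cli); the section heading is just an example:

```bash
# Read today's Working Memory (the same content as ~/ai-now/memory.md)
nmem wm

# Append a note to one section without touching the rest of the briefing
nmem wm patch --heading "## Focus Areas" --append "Ship the migration plan today"
```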
Configuration [#configuration] Control background processing in **Settings > Knowledge Processing**: Memory Processing Settings | Setting | Default | What it controls | | --------------------------- | ----------------- | ----------------------------------------------------- | | **Background Intelligence** | Off | Master toggle for all background processing | | **Daily Briefing** | On (when enabled) | Morning Working Memory generation | | **Briefing Hour** | 8 | What hour the daily briefing runs (local time) | | **Auto Extraction** | On (when enabled) | Automatic knowledge graph enrichment for new memories | On Linux servers, configure via CLI: ```bash nmem config settings set backgroundIntelligence true nmem config settings set autoDailyBriefing true nmem config settings set briefingHour 8 ``` Next Steps [#next-steps] * **[Getting Started](/docs/getting-started)**: The Timeline, document import, and all ways to add knowledge * **[Integrations](/docs/integrations)**: Connect your AI tools via MCP and browser extensions * **[Troubleshooting](/docs/troubleshooting)**: Common issues and solutions # AI Now (/docs/ai-now) import { Callout } from 'fumadocs-ui/components/callout'; import { Step, Steps } from 'fumadocs-ui/components/steps'; import { Tab, Tabs } from 'fumadocs-ui/components/tabs'; import { Telescope, FileText, Pencil, Presentation, Download, Plane, FastForward } from 'lucide-react'; import VideoPlayer from "@/components/ui/video-player"; AI Now is a personal AI agent running on your machine. It has full access to your knowledge base — every decision, insight, and document you've saved. It connects to Obsidian, Notion, Apple Notes, and any service through plugins. It's not a chatbot. It has purpose-built capabilities: deep multi-source research, file and data analysis with visualization, presentation creation with live preview and export, and travel planning. Each one draws from your full context — your past decisions, your patterns, your history.
AI Now requires a configured **Remote LLM**. Go to **Settings** → **Remote LLM** to set it up; see [Remote LLMs](/docs/usage#remote-llms) for details. Capabilities [#capabilities] | Category | What it does | | ---------------------- | ------------------------------------------------------------ | | **Memory Search** | Finds relevant memories with semantic understanding | | **Deep Research** | Multi-source research combining your memories and web search | | **File Analysis** | Analyzes Excel, CSV, Word, PDF files you provide | | **Data Visualization** | Generates charts from your data | | **Presentations** | Creates slides with live preview and PowerPoint export | | **Travel Planning** | Creates interactive day-by-day itineraries | | **Integrations** | Connects to Notion, Obsidian, Apple Notes, and MCP servers | Getting Started [#getting-started] Configure Remote LLM [#configure-remote-llm] Go to **Settings** → **Remote LLM** and add your API key. Open AI Now [#open-ai-now] Click the **AI Now** tab in the sidebar, or press Cmd/Ctrl + 5. Start a Task [#start-a-task] Ask anything. AI Now searches your memories when relevant: > What architecture decisions have I made about caching? It pulls from your memories, searches the web and connected notes (Notion, Obsidian, Apple Notes), and synthesizes a single answer. You can also drop files or folders for instant analysis, request reports based on your recent work, or run a deep study on any topic. AI Now creates or updates memories as it works. Refer to memories in your chat [#refer-memories-in-your-chat] Use `@` to search for and mention specific memories in your conversation. Deep Research [#deep-research] For comprehensive research, AI Now runs parallel sub-tasks across multiple sources and synthesizes the results. Deep Research Click the Research toggle in the AI Now chat interface. How It Works [#how-it-works] Ask a research question: > Research the current state of quantum error correction AI Now will: 1. Search your memories for existing knowledge on the topic 2. Search the web from multiple angles 3. Synthesize findings into a single answer 4. Cite sources with reliability indicators Skills [#skills] Skills are specialized capabilities you enable for specific tasks. | Skill | What it enables | | ------------------------ | ----------------------------------------------------- | | **Documents** | Excel/CSV analysis, chart generation, file operations | | **Presentation Creator** | Slide generation with live preview and export | | **Travel Planner** | Interactive itinerary creation | Enable skills in **AI Now** → **Plugins** → **Skills**. File Analysis [#file-analysis] Attach files or folders to your conversation for analysis. Toggle the Documents SKILL in AI Now Plugins to enable. Supported Files [#supported-files] | Type | Extensions | What AI Now Does | | ---------------- | ------------------- | -------------------------------------------------- | | **Spreadsheets** | .xlsx, .xls, .csv | Analyzes data, finds patterns, generates charts | | **Documents** | .docx, .doc, .pdf | Summarizes, extracts key points, answers questions | | **Code** | .py, .js, .ts, etc. | Reviews, explains, suggests improvements | Example [#example] 1. Attach `sales_q4.xlsx` 2. Ask: "What are the top 3 trends in this data?" 3. AI Now analyzes and generates visualizations Whole folders work too. Data Analysis Presentations [#presentations] Toggle the Presentation SKILL in AI Now Plugins to enable.
> Create a presentation based on our above study and research, include some charts or diagrams to support the insights AI Now generates slides with structure, charts, and insights from your conversation. Presentation Creation Refine with follow-up requests ("Make the third slide more visual", "Add a slide about customer segments"), or click Edit to edit directly. Export as PowerPoint with the PPTX button. Travel Planning [#travel-planning] Toggle the Travel Planner SKILL in AI Now Plugins to enable. > Plan a 5-day trip to Tokyo focusing on food and culture AI Now generates an interactive day-by-day itinerary using your recent memories and web research as context. Travel Planning Plugins [#plugins] Extend AI Now with connections to your other apps. Built-in Plugins [#built-in-plugins] Obsidian [#obsidian] 1. Go to **AI Now** → **Plugins** 2. Enable **Obsidian** 3. Set your vault path AI Now can now search and read your Obsidian notes alongside your memories. Notion [#notion] 1. Go to **AI Now** → **Plugins** 2. Enable **Notion** 3. Click **Connect** and authorize in the browser AI Now can search your Notion pages and databases. Apple Notes (macOS) [#apple-notes-macos] 1. Go to **AI Now** → **Plugins** 2. Enable **Apple Notes** 3. Grant permission when prompted Custom MCP Plugins [#custom-mcp-plugins] AI Now supports Model Context Protocol for custom integrations. Go to **AI Now** → **Plugins** → **Custom Plugins** Click **Add MCP Server** Configure the server (stdio command or HTTP endpoint) Click **Test Connection** to verify Enable the plugin MCP plugins with OAuth (GitHub, Slack, etc.) are detected automatically and prompt for authorization. Session Management [#session-management] Conversations are saved automatically. Click a previous session to resume, or create new sessions for parallel workstreams. Each session maintains its own history. Auto-Approve Mode [#auto-approve-mode] Enable Auto to skip confirmation prompts for file operations and other actions. Auto-Approve grants AI Now permission to act without asking. Only enable for trusted workflows. Tips [#tips] * **Be specific**: "What did we decide about the database migration last month?" beats "database stuff" * **Attach context**: drop files or mention notes with `@` for better results * **Use sessions**: separate sessions for different projects or topics Next Steps [#next-steps] * **[Remote LLM Setup](/docs/usage#remote-llms)**: Configure your AI provider * **[Integrations](/docs/integrations)**: Connect your AI tools * **[Background Intelligence](/docs/advanced-features)**: How your knowledge grows on its own # Nowledge Mem CLI (/docs/cli) import { Step, Steps } from 'fumadocs-ui/components/steps'; import VideoPlayer from "@/components/ui/video-player"; The `nmem` CLI gives you terminal access to your Nowledge Mem knowledge base. Search memories, browse threads, read and edit Working Memory, explore the knowledge graph, and view your activity feed — all from the shell. Installation [#installation] Option 1: Standalone PyPI Package [#option-1-standalone-pypi-package] Install on any machine — works with a local or remote Nowledge Mem server: ```bash pip install nmem-cli # or with uv uv pip install nmem-cli # or run without installing uvx --from nmem-cli nmem --help ``` **Requirements:** Python 3.11+, Nowledge Mem running locally or reachable remotely. The standalone package lets you reach your Nowledge Mem from servers, CI/CD pipelines, or remote workstations. See [Access Mem Anywhere](/docs/remote-access). 
View on [PyPI](https://pypi.org/project/nmem-cli/). Option 2: Bundled with Desktop App [#option-2-bundled-with-desktop-app] macOS [#macos] Go to **Settings → Preferences → Developer Tools** and click **Install CLI**. Installs to `~/.local/bin/nmem`. Make sure `~/.local/bin` is on your `PATH`: ```bash echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc && source ~/.zshrc ```
Windows [#windows] The CLI is automatically available after app installation. Open a **new terminal window** to use `nmem`. Linux [#linux] Included with deb/rpm packages. The binary is placed in `/usr/local/bin/nmem`.
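Whichever install path you used, a quick check confirms the binary is reachable:

```bash
command -v nmem   # prints the install location if nmem is on your PATH
nmem --version    # prints the CLI version
```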
*** Quick Start [#quick-start] ```bash nmem status # Check connection nmem m search "project notes" # Search memories nmem m add "Key insight" --title "Learning" nmem wm # Read today's Working Memory nmem f --days 1 # Today's activity nmem g expand # Explore graph connections nmem tui # Interactive terminal UI ``` *** Global Options [#global-options] | Option | Description | | ----------------- | ------------------------------------------- | | `-j, --json` | Machine-readable JSON output | | `--api-url ` | API URL (default: `http://127.0.0.1:14242`) | | `-v, --version` | Show version | | `-h, --help` | Show help | **Aliases:** `m` = memories · `t` = threads · `wm` = working-memory · `g` = graph · `f` = feed · `c` = communities *** Memory Commands (nmem m) [#memory-commands-nmem-m] List memories [#list-memories] ```bash nmem m # Recent 10 memories nmem m -n 50 # List 50 nmem m --importance 0.7 # Minimum importance filter ``` Search [#search] ```bash nmem m search "authentication patterns" nmem m search "API design" --importance 0.8 nmem m search "deploy" -l devops -l backend # Filter by labels (AND) nmem m search "sprint" --mode deep # Graph + LLM-enhanced results ``` **Bi-temporal search** — distinguish *when something happened* from *when you saved it*: ```bash nmem m search "database decision" --event-from 2025-01 --event-to 2025-06 nmem m search "meeting notes" --recorded-from 2026-01-01 ``` | Option | Description | | -------------------- | --------------------------------------------------------- | | `-n` | Max results | | `-l, --label` | Filter by label (repeatable) | | `--importance` | Minimum importance (0–1) | | `--mode` | `normal` (default, fast) or `deep` (graph + LLM-enhanced) | | `--event-from/to` | When the fact *happened* (YYYY, YYYY-MM, or YYYY-MM-DD) | | `--recorded-from/to` | When it was *saved* to Nowledge Mem (YYYY-MM-DD) | Add [#add] ```bash nmem m add "We chose PostgreSQL for task events" nmem m add "Prefer functional components in React" \ --title "Frontend conventions" \ --unit-type preference \ --importance 0.8 \ -l frontend -l react # Record when something actually happened (bi-temporal) nmem m add "Decided to sunset the legacy API" \ --unit-type decision \ --event-start 2025-11 \ --when past ``` | Option | Description | | ------------------ | ------------------------------------------------------------------------------ | | `-t, --title` | Memory title | | `-i, --importance` | Importance 0–1 | | `-l, --label` | Add label (repeatable) | | `--unit-type` | `fact` `preference` `decision` `plan` `procedure` `learning` `context` `event` | | `--event-start` | When it happened (YYYY, YYYY-MM, YYYY-MM-DD) | | `--event-end` | End of a time range | | `--when` | `past` `present` `future` `timeless` (default: timeless) | Show [#show] ```bash nmem m show nmem m show --content-limit 500 ``` Update [#update] ```bash nmem m update --title "New title" nmem m update --importance 0.9 nmem m update --content "Updated content" ``` Delete [#delete] ```bash nmem m delete nmem m delete -f # Skip confirmation nmem m delete # Multiple IDs ``` *** Thread Commands (nmem t) [#thread-commands-nmem-t] List and search [#list-and-search] ```bash nmem t # Recent 20 threads nmem t -n 50 nmem t search "architecture decisions" ``` Show [#show-1] ```bash nmem t show nmem t show -m 50 # Show up to 50 messages nmem t show --content-limit 200 ``` Create [#create] ```bash # From text nmem t create -t "Quick note" -c "Remember to review the API changes" # From a file nmem t create -t "Meeting notes" -f 
notes.md # With structured messages nmem t create -t "Chat session" \ -m '[{"role":"user","content":"Hello"},{"role":"assistant","content":"Hi!"}]' # With a stable ID (idempotent — safe to re-run) nmem t create -t "OpenClaw session" --id "openclaw-abc123-session" ``` Append [#append] Add messages to an existing thread. Safely idempotent — duplicate messages are filtered by content hash or external ID. ```bash # Single message nmem t append -c "Follow-up note" # Structured messages nmem t append \ -m '[{"role":"user","content":"Question"},{"role":"assistant","content":"Answer"}]' # With idempotency key (safe for retries / repeated hook fires) nmem t append \ -m '[{"role":"user","content":"msg"}]' \ --idempotency-key "oc-batch-session-001" ``` Save Claude Code / Codex session [#save-claude-code--codex-session] ```bash nmem t save --from claude-code # Save Claude Code session nmem t save --from codex # Save Codex session nmem t save --from codex -s "Summary" # With session summary ``` | Option | Description | | --------------- | --------------------------------------------- | | `--from` | `claude-code` or `codex` (required) | | `-p, --project` | Project directory path (default: current dir) | | `-m, --mode` | `current` (latest) or `all` sessions | | `--session-id` | Specific session ID (Codex only) | | `-s, --summary` | Brief session summary | | `--truncate` | Truncate large tool results (>10KB) | Delete [#delete-1] ```bash nmem t delete nmem t delete -f # Force nmem t delete --cascade # Also delete associated memories ``` *** Working Memory (nmem wm) [#working-memory-nmem-wm] Working Memory is the AI-generated daily briefing — focus areas, open questions, and recent activity. The Knowledge Agent updates it each morning. Read [#read] ```bash nmem wm # Today's Working Memory nmem wm --date 2026-02-12 # Archived date nmem wm history # List available archived dates ``` Edit [#edit] ```bash nmem wm edit # Opens $EDITOR nmem wm edit -m "## Focus Areas\n- Ship v0.6" # Set directly ``` Patch a section (non-destructive) [#patch-a-section-non-destructive] Replace or append to one section without touching the rest of the document: ```bash # Replace a section nmem wm patch --heading "## Focus Areas" --content "- Finish OpenClaw plugin release" # Append to a section nmem wm patch --heading "## Notes" --append "Reminder: deploy to staging tonight" ``` The heading is matched case-insensitively and partially — `"Focus"` matches `"## Focus Areas"`. *** Graph Commands (nmem g) [#graph-commands-nmem-g] Expand graph neighborhood [#expand-graph-neighborhood] Explore connected memories, entities, crystals, and source documents around a given memory: ```bash nmem g expand nmem g expand --depth 2 # Two hops out nmem g expand -n 10 # Limit neighbors per hop ``` Show EVOLVES version chain [#show-evolves-version-chain] See how a memory has been refined or superseded over time: ```bash nmem g evolves ``` *** Feed (nmem f) [#feed-nmem-f] The activity feed shows what was saved, learned, synthesized, or ingested — chronologically. 
```bash nmem f # Last 7 days (high-signal events) nmem f --days 1 # Today only nmem f --days 30 # Last 30 days nmem f --type crystal_created # Only crystal synthesis events nmem f --from 2026-02-10 --to 2026-02-14 # Exact date range nmem f --all # Include low-signal background events nmem f -n 50 # Limit events (default: 100) ``` | Option | Description | | ---------------- | ------------------------------------------------ | | `--days` | How many days back (default: 7; use 1 for today) | | `--type` | Filter by event type | | `-n, --limit` | Max events to fetch (default: 100) | | `--all` | Include low-signal background events | | `--from`, `--to` | Exact date range (YYYY-MM-DD) | **Event types:** `memory_created` · `crystal_created` · `insight_generated` · `source_ingested` · `source_extracted` · `daily_briefing` · `url_captured` *** Knowledge Communities (nmem c) [#knowledge-communities-nmem-c] Browse topic clusters automatically detected in your knowledge graph: ```bash nmem c # List communities nmem c -n 20 nmem c show # Show community details (entities, memories) nmem c detect # Trigger community detection (background) ``` *** Configuration & Models [#configuration--models] Embedding model [#embedding-model] ```bash nmem models status # Check current model status nmem models download # Download the embedding model nmem models reindex # Rebuild the search index ``` LLM provider [#llm-provider] ```bash nmem config provider list nmem config provider set openai --api-key sk-xxx --model gpt-4o nmem config provider test ``` Processing settings [#processing-settings] ```bash nmem config settings # Show all settings nmem config settings set briefingHour 8 # Change morning briefing time ``` License [#license] ```bash nmem license status nmem license activate nmem license deactivate # Deactivate license on this device ``` *** Remote Access [#remote-access] ```bash # LAN / private network export NMEM_API_URL=http://192.168.1.100:14242 nmem status # Cloudflare tunnel (from desktop app: Settings → Access Mem Anywhere) export NMEM_API_URL=https:// export NMEM_API_KEY=nmem_... nmem m search "notes" # One-off without env vars nmem --api-url https:// status ``` | Variable | Description | Default | | -------------- | --------------------- | ------------------------ | | `NMEM_API_URL` | API server URL | `http://127.0.0.1:14242` | | `NMEM_API_KEY` | API key (Bearer auth) | *(unset)* | Full guide: [Access Mem Anywhere](/docs/remote-access). 
*** JSON Output [#json-output] Add `--json` (or `-j`) before the subcommand for machine-readable output: ```bash nmem --json m search "API design" | jq '.memories[0].id' nmem --json m add "Note" | jq -r '.id' nmem --json f --days 1 | jq '.events[].title' ``` Search response [#search-response] ```json { "query": "API design", "total": 3, "search_mode": "fast_bm25_vector", "memories": [ { "id": "abc123-def456-...", "title": "REST API versioning decision", "content": "We use /v1/ prefix for all public endpoints...", "score": 0.91, "relevance_reason": "Text Match (89%) + Semantic Match (73%) | decay[imp:high]", "importance": 0.8, "labels": ["architecture", "api"], "event_start": "2025-09", "temporal_context": "past", "source": "cli" } ] } ``` Feed response [#feed-response] ```json { "events": [ { "id": "evt-...", "event_type": "memory_created", "severity": "info", "title": "Memory or event title", "description": "Summary text...", "metadata": { "source": "claude", "unit_type": "fact" }, "related_memory_ids": ["..."], "created_at": "2026-02-20T02:35:28+00:00" } ] } ``` Error response [#error-response] ```json { "error": "api_error", "status_code": 404, "detail": "Memory not found" } ``` *** Status and Statistics [#status-and-statistics] ```bash nmem status # nmem v0.6.2 # status ok # api http://127.0.0.1:14242 # database connected nmem stats # Database Statistics # memories 83 # threads 27 # entities 248 # labels 177 # communities 32 ``` *** AI Agent Integration [#ai-agent-integration] The `--json` flag and stable exit codes make `nmem` easy to drive from AI agents. ```bash # Search for context before responding nmem --json m search "authentication flow" | jq '.memories[:3]' # Save an insight nmem m add "Rate limiting is per-user, not per-IP" \ --unit-type learning --importance 0.8 -l backend # Save a decision with when it was made nmem m add "Chose Postgres over MySQL for task events" \ --unit-type decision --event-start 2026-02 -l architecture # Browse what was worked on last week nmem --json f --days 7 | jq '.events[].title' # Create a session thread backup nmem t create -t "Debug session $(date +%Y%m%d)" \ -m '[{"role":"user","content":"Investigate auth failures"},{"role":"assistant","content":"Found rate limit issue"}]' ``` *** TUI [#tui] An interactive terminal UI for browsing memories, threads, and the knowledge graph: ```bash nmem tui ``` nmem tui main nmem tui memory nmem tui thread nmem tui graph *** Troubleshooting [#troubleshooting] **"command not found: nmem"** * PyPI install: `pip install nmem-cli` (Python 3.11+) * Run without installing: `uvx --from nmem-cli nmem --help` * macOS desktop: Settings → Preferences → Developer Tools → Install CLI → then ensure `~/.local/bin` is on your PATH * Windows: open a new terminal after app installation **"Cannot connect to server"** 1. Ensure Nowledge Mem is running 2. Try: `nmem --api-url http://127.0.0.1:14242 status` 3. 
Check for proxy or VPN blocking localhost # Community & Support (/docs/community) import { Card as RawCard, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card" import { MessageSquare, Twitter, Github, Mail, Users, BookOpen, MessageCircle, AlertTriangle, Lightbulb } from "lucide-react" Community Channels [#community-channels] Get Support [#get-support] Documentation [#documentation] * **[Getting Started](/docs/getting-started)** - Set up and create your first memories * **[Integrations](/docs/integrations)** - Connect with AI tools via MCP and browser extensions * **[Background Intelligence](/docs/advanced-features)** - Knowledge graph, insights, crystals, and working memory * **[Troubleshooting](/docs/troubleshooting)** - Common issues and solutions Report Issues & Request Features [#report-issues--request-features] Email Support [#email-support] For direct assistance, reach out to our team:
[hello@nowledge-labs.ai](mailto:hello@nowledge-labs.ai)
Pro plan users receive access to a dedicated Pro Discord channel and direct IM support. [Learn more about Pro](/docs/mem-pro). # Getting Started (/docs/getting-started) import VideoPlayer from "@/components/ui/video-player"; import { Step, Steps } from 'fumadocs-ui/components/steps'; The Timeline [#the-timeline] Open Nowledge Mem. You see one input and a timeline below it. Nowledge Mem Timeline Save a thought [#save-a-thought] Type a decision, an insight, anything worth keeping. Hit enter. Nowledge Mem handles the rest: title, key concepts, graph connections. You just write. Open the Graph view later and you'll see it already linked to related memories. Ask a question [#ask-a-question] Type a question: *"What did I decide about authentication last month?"* The answer comes from **your own knowledge**: not the internet. Every question searches your full memory and synthesizes an answer from what you've written and saved. Drop a URL or file [#drop-a-url-or-file] Paste a URL. The page gets fetched, parsed, and stored as a searchable source. Drop a PDF, a Word doc, a presentation. Same treatment. Each input grows your knowledge base. Nowledge Mem Timeline Connect Any Tool [#connect-any-tool] One command installs the full skill set: ```bash npx skills add nowledge-co/community/nowledge-mem-npx-skills ``` Works with Claude Code, Cursor, Codex, OpenCode, OpenClaw, Alma, and 20+ other agents. After setup, your agent starts each session with your context, searches your knowledge mid-task, and saves what it learns. If OpenClaw is your first tool, use the 5-minute guide: * **[OpenClaw in 5 Minutes](/docs/integrations/openclaw)** For a lighter setup, open **Settings > Preferences** and install the CLI skill from **Developer Tools**. This gives agents core search and recall without the full autonomous workflow. Install Skills from Settings Or configure MCP directly [#or-configure-mcp-directly] For any MCP-compatible tool, add this to its MCP settings: ```json { "mcpServers": { "nowledge-mem": { "url": "http://127.0.0.1:14242/mcp", "type": "streamableHttp" } } } ``` Claude Desktop [#claude-desktop] [Download the extension](/docs/integrations#claude-desktop). One-click install, no config. See [Integrations](/docs/integrations) for all tool-specific guides. More Ways In [#more-ways-in] * **AI conversations**: the [browser extension](/docs/integrations#browser-extension) captures insights from ChatGPT, Claude, Gemini, and 13+ platforms * **Thread files**: [import](/docs/integrations#thread-file-import) exported conversations from Cursor, ChatGPT, or ChatWise * **Manual**: create memories in the Memories view with **+ Create**, or from any terminal with `nmem m add` ([CLI reference](/docs/cli)) Come Back Tomorrow [#come-back-tomorrow] Here's what happens after a few days of normal use: **Tuesday** — you save a decision: "Using PostgreSQL for the new service." **Thursday** — you mention CockroachDB as a possible migration target. **Friday morning** — your briefing at `~/ai-now/memory.md` notes: "Your database thinking is evolving. PostgreSQL decision (Tuesday) now in tension with CockroachDB consideration (Thursday)." You didn't connect these yourself. Mem did. This is **Background Intelligence** at work: * **Knowledge evolution.** Mem detects when your thinking on a topic changes and links the versions together, with the full trail. * **Crystals.** When enough memories cover the same ground, Mem synthesizes them into a reference article you can cite. 
* **Flags.** Contradictions between your past and present thinking surface in the Timeline. You decide what to do. * **Working Memory.** A daily briefing at `~/ai-now/memory.md`. Your AI tools read it at session start — they know what you're working on before you say anything. None of this requires action from you. It shows up in the Timeline. Background intelligence requires a [Pro license](/docs/mem-pro) and a configured Remote LLM. Next Steps [#next-steps] * **[Using Nowledge Mem](/docs/usage)**: Daily workflow: search, briefings, and how your tools use your knowledge * **[AI Now](/docs/ai-now)**: Personal AI with full access to your knowledge base * **[Background Intelligence](/docs/advanced-features)**: Knowledge graph, insights, crystals, and daily briefings * **[Integrations](/docs/integrations)**: Connect your AI tools * **[Access Mem Anywhere](/docs/remote-access)**: Reach your Mem from other laptops, agent nodes, and browser tools with URL + API key # Nowledge Mem (/docs) import { Card as RawCard, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card" import { ArrowRight, Zap, Bot, Network, Sparkles } from "lucide-react" import VideoPlayer from "@/components/ui/video-player" Your AI tools forget everything. Nowledge Mem doesn't. Save a decision, an insight, a breakthrough — it links to everything else you know. A knowledge graph grows as you work, tracking how your thinking evolves. Overnight, the system finds connections you missed and writes your AI tools a morning briefing. Every tool you connect shares the same knowledge. Claude Code, Cursor, Codex, ChatGPT, whatever comes next. Explain something once. Every tool knows it.
Connect Any Tool [#connect-any-tool] Works with anything that speaks MCP, plus browser extensions and direct plugins. Skill-based plugin with autonomous memory access First-time setup guide for Nowledge Mem memory plugin MCP integration for memory search and creation One-click extension installation Capture conversations from ChatGPT, Gemini, and 13+ platforms Import Your Documents [#import-your-documents] Drop a PDF, Word doc, or presentation into the Library. It gets parsed and indexed alongside your memories. When you ask a question in the Timeline, the answer draws from both. Local-First Privacy [#local-first-privacy] Everything runs on your device. No cloud, no accounts. You can connect a remote LLM when you want stronger processing, but your data never touches Nowledge servers. Get started in minutes Your first five minutes # Installation (/docs/installation) import { DragToApplicationsAnimation } from '@/components/docs/drag_install'; import { InstallationSteps } from '@/components/docs/installation-steps'; import { Step, Steps } from 'fumadocs-ui/components/steps'; import { Tab, Tabs } from 'fumadocs-ui/components/tabs'; import { ExternalLink, Download } from 'lucide-react'; import { Button } from '@/components/ui/button'; Nowledge Mem is currently in **private alpha**. To get download access: * **Join the waitlist**: Submit your email [here](https://nowled.ge/alpha) and we'll send you the download link within hours. * **Get instant access**: [Pro plan](/pricing) subscribers receive immediate download access Already have access? You'll find a download link in your alpha invitation email. Check your **spam** folder if you don't see it. System Requirements [#system-requirements] Minimum system requirements: | Requirement | Specification | | -------------------- | ------------------------------------------------------------------------------------------- | | **Operating System** | macOS 15 or later with Apple Silicon, or Windows 10 or later | | **Memory (RAM)** | 16 GiB minimum | | **Disk Space** | 10 GiB available | | **Network** | If using a proxy, ensure it bypasses `127.0.0.1` and `localhost` | **Linux servers** are supported in headless mode. See the **[Linux Server Deployment](/docs/server-deployment)** guide to run Nowledge Mem on servers without a desktop environment. Installation Steps [#installation-steps] Step 1: Install the Application [#step-1-place-app] Drag Nowledge Mem to your `/Applications` folder. Install from the Microsoft Store. Search for "Nowledge Mem" in the [Microsoft Store](https://apps.microsoft.com/detail/9ntrknn2w5dq?hl=en-us\&gl=US\&ocid=pdpshare), or click the button below to open the Microsoft Store. Click the **Install** button to install Nowledge Mem. Microsoft Store Install Step 2: Launch the Application [#step-2-first-boot] Double-click the Nowledge Mem icon in your Applications folder to launch the app for the first time. If the app takes too long to start or shows errors: * **Service timeout**: If you see "It took too long to start the service", this usually means a global proxy is preventing access to `localhost`. Disable your proxy and try again. * **macOS version**: Ensure you're running macOS 15 or later. Older versions are not supported. * **Need more help?** Check the [Troubleshooting Guide](/docs/troubleshooting) to view logs and get detailed diagnostics. You can share logs with our community or email support for assistance. After installation completes, Nowledge Mem launches automatically. To launch the app manually, click **Open** on Nowledge Mem in the Microsoft Store, or open the Start menu and search for "Nowledge Mem". If the app takes too long to start or shows errors: * **Service timeout**: If you see "It took too long to start the service", this usually means a global proxy is preventing access to `localhost`. Disable your proxy and try again. * **Need more help?** Check the [Troubleshooting Guide](/docs/troubleshooting) to view logs and get detailed diagnostics. You can share logs with our community or email support for assistance. Step 3: Download AI Models [#step-3-download-models] After launching Nowledge Mem, you'll need to download the local AI models (approximately 2.4 GB total): * **Apple Silicon Mac**: On-device LLM is supported. * **Windows**: Remote LLM is required. * **Intel Mac**: Remote LLM is required. * **Linux**: Remote LLM is required. **Check notifications**: You'll see download prompts in the top-right corner of the app **Navigate to models**: Click the notification button, or go to **Settings** → **Models** **Install models**: Click **Install** on the LLM model card LLM Model Install The download will begin automatically, and you can monitor the progress: LLM Model Install Progress Depending on your internet connection, the download may take 5-15 minutes. The models only need to be downloaded once. Step 4: Install the Browser Extension [#step-4-browser-extension] The **Nowledge Mem Exchange** browser extension captures insights from your AI conversations on ChatGPT, Claude, Gemini, and 13+ other platforms. After installing, click the extension icon to open the SidePanel. Configure your LLM provider in **Settings** to enable auto-capture. ChatGPT, Claude, Gemini, Perplexity, DeepSeek, Kimi, Qwen, POE, Manus, Grok, and more. The extension monitors your conversations and saves valuable insights: decisions, discoveries, and conclusions. Routine Q\&A is skipped.
See the [Browser Extension guide](/docs/integrations#browser-extension) for details. Next Steps [#next-steps] * **[Getting Started](/docs/getting-started)**: Your first five minutes with the Timeline * **[Integrations](/docs/integrations)**: Connect Claude Code, Cursor, and other AI tools * **[Linux Server Deployment](/docs/server-deployment)**: Run headless on a Linux server # Library (/docs/library) import { Step, Steps } from 'fumadocs-ui/components/steps'; import VideoPlayer from "@/components/ui/video-player"
Drop a 40-page architecture review into the Library. Ask in the Timeline: *"What does the review say about API rate limits?"* The answer cites page 12 of the document and a Redis decision you saved three months ago. Your documents and your memories search together. The Library stores PDFs, Word files, presentations, and Markdown. Content is parsed, split into searchable segments, and indexed. Every document becomes searchable from the Timeline, global search, and connected AI tools via MCP. Supported Formats [#supported-formats] | Format | Extensions | What Happens | | ----------------- | ----------- | ------------------------------------------------------- | | **PDF** | .pdf | Text extracted, split into segments, indexed for search | | **Word** | .docx, .doc | Parsed to text, segmented, indexed | | **Presentations** | .pptx | Slide content extracted and indexed | | **Markdown** | .md | Parsed and indexed directly | Adding Documents [#adding-documents] Drag files into the Timeline input, or use the Library view to import. Documents go through a processing pipeline: 1. **Parsing**: content extracted from the file format 2. **Segmentation**: split into searchable chunks 3. **Indexing**: added to both vector and keyword search indexes Processing status is visible in the Library view. Once indexed, the document's content is searchable from the Timeline, global search, and connected AI tools via MCP. Searching Documents [#searching-documents] Documents are searched alongside memories. A Timeline question like *"What does the Q4 report say about churn?"* searches both your saved memories and any imported documents that match. In the Library view, you can also browse and search documents directly. How It Connects [#how-it-connects] Documents in the Library are sources for your knowledge base, not memories themselves. The distinction: * **Memories** are atomic insights, decisions, or facts you or the system extracted * **Documents** are reference material you imported whole When you distill a document, individual insights can be extracted as memories and connected to the knowledge graph. The document remains in the Library as the source. Next Steps [#next-steps] * **[Getting Started](/docs/getting-started)**: The Timeline and all ways to add knowledge * **[Background Intelligence](/docs/advanced-features)**: How imported knowledge connects to your graph * **[Search & Relevance](/docs/search-relevance)**: How search ranks results across memories and documents # Mem Pro Plan (/docs/mem-pro) import { Card as RawCard, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card" import { Badge } from "@/components/ui/badge" import { Button } from "@/components/ui/button" import { ArrowRight, Download } from "lucide-react" import { Step, Steps } from 'fumadocs-ui/components/steps'; Free vs Pro Plans [#free-vs-pro-plans] Nowledge Mem offers two plans: **Free** and **Pro**. The **Pro** plan unlocks unlimited memories, remote LLM integration (BYOK), and background intelligence features. For detailed feature comparisons, visit the [Pricing Page](https://mem.nowledge.co/pricing). Activating Your Lifetime Pro License [#activating-your-lifetime-pro-license] Visit the pricing page and click the **Lifetime Pro** button to proceed to checkout: Complete the payment using your email address. Your email address will be used to receive the license key and is permanently associated with your Pro plan activation. Payment Page You'll receive an email with your license key. 
You can retrieve your license key anytime at mem.nowledge.co/licenses using your email address. Open Nowledge Mem and navigate to **Settings** → **Plans**: Free Plan Enter your email address and license key, then click **Activate License**: Activating Pro Once activated, your Pro plan status will be displayed: Activated Pro Manage your activated devices anytime at mem.nowledge.co/licenses. Need help? Contact [hello@nowledge-labs.ai](mailto:hello@nowledge-labs.ai) for assistance with activation or licensing. # Access Mem Anywhere (/docs/remote-access) import { Callout } from 'fumadocs-ui/components/callout'; import { Step, Steps } from 'fumadocs-ui/components/steps'; Nowledge Mem can expose your local API through Cloudflare Tunnel. You get a public URL, and every request is still protected by your Mem API key. Use this when you want one memory center across your laptop, desktop, agent nodes, and browser tools. Access Mem Anywhere Choose Your Connection Type [#choose-your-connection-type] | Type | Best for | What URL you get | | ---------------------- | ---------------------------- | --------------------------------------------------------------------- | | **Quick link** | Fast setup in under a minute | Random `*.trycloudflare.com` URL | | **Cloudflare account** | Daily/long-term usage | Stable URL on your own domain (for example `https://mem.example.com`) | Before You Start [#before-you-start] Open this guide from **Settings → Access Mem Anywhere → Guide**. * Quick link needs no Cloudflare account and no domain. * Cloudflare account mode requires a domain already managed in your Cloudflare account. * If you do not have a domain in Cloudflare yet, use **Quick link** first. * In Cloudflare account mode, the final public URL appears only after you create a hostname route. Path A: Quick Link (No Account) [#path-a-quick-link-no-account] Open remote access in Mem [#open-remote-access-in-mem] Open **Settings → Access Mem Anywhere**. Turn on **Allow devices on same Wi-Fi** if you also want LAN access. Choose Quick link and start [#choose-quick-link-and-start] In **Access from Anywhere**, choose **Quick link**, then click **Start**. Wait for status to become **Live**. Copy URL and API key [#copy-url-and-api-key] In **Ready to connect**, copy: * **URL** * **API key** Use **Rotate** if you want to issue a fresh key. Verify from another machine [#verify-from-another-machine] ```bash export NMEM_API_URL="https://" export NMEM_API_KEY="nmem_..." nmem status ``` Expected: `status ok`. Path B: Cloudflare Account (Stable URL) [#path-b-cloudflare-account-stable-url] You need a domain already in Cloudflare DNS (for example `example.com`) before this path can produce a stable URL. Create a tunnel and copy the token [#create-a-tunnel-and-copy-the-token] In Cloudflare Zero Trust: 1. Open **Networks** → **Connectors** → **Create a tunnel**. 2. Click **Select** under **Cloudflared**. Cloudflare Connectors page 3. Name the tunnel and click **Save tunnel**. Name your tunnel 4. In **Install and run connectors**, copy the token from a command like: ```bash sudo cloudflared service install ... ``` In Mem Desktop, you can paste either: * the raw token, or * the full command line (supported forms: `service install `, `--token `, `--token=`). Mem extracts the token automatically. Copy token from command Create a public hostname route [#create-a-public-hostname-route] In tunnel routing / hostname routes: 1. Create a hostname (for example `mem.example.com`). 2. Bind it to the tunnel you created. This step creates your stable public URL.
Hostname routes list Create hostname route Map the hostname to local Mem API [#map-the-hostname-to-local-mem-api] 1. Open NetworksConnectors → your tunnel. Open tunnel details 2. In Published application routes, click Add a published application route. Add app route 3. Map `mem.example.com` to your local Mem server: * Subdomain: `mem` * Domain: your Cloudflare-managed domain * Service Type: `HTTP` * Service URL: `http://127.0.0.1:14242` Do not append `/remote-api`. Map to local Mem API Save and start in Mem [#save-and-start-in-mem] Back in SettingsAccess Mem AnywhereCloudflare account: * Public URL: `https://mem.example.com` * Tunnel token: paste raw token or full `cloudflared` command Then: * Click Save * Click Start * Click Rotate if you want a fresh key * Click Copy to copy URL and API key Verify from another machine [#verify-from-another-machine-1] ```bash export NMEM_API_URL="https://mem.example.com" export NMEM_API_KEY="nmem_..." nmem status ``` Expected: `status ok`. Use It on Other Clients [#use-it-on-other-clients] nmem CLI [#nmem-cli] ```bash export NMEM_API_URL="https://" export NMEM_API_KEY="nmem_..." nmem status nmem m search "project notes" ``` Browser Extension (SidePanel) [#browser-extension-sidepanel] Open any supported AI chat page, then open **Nowledge Mem Exchange** in the browser SidePanel: 1. Click **Settings** 2. In **Access Mem Anywhere**, paste the terminal setup copied from Mem Desktop: ```bash export NMEM_API_URL="https://" export NMEM_API_KEY="nmem_..." ``` 3. Click **Fill URL + key** 4. Click **Save** 5. Click **Test connection** (should show success) You can also type URL + key manually in the same section. OpenClaw Plugin [#openclaw-plugin] Two options — pick whichever fits your setup: **Option A — Plugin config (recommended)** Add `apiUrl` and `apiKey` directly to your plugin entry in `~/.openclaw/openclaw.json`: ```json { "plugins": { "slots": { "memory": "openclaw-nowledge-mem" }, "entries": { "openclaw-nowledge-mem": { "enabled": true, "config": { "autoRecall": true, "autoCapture": false, "maxRecallResults": 5, "apiUrl": "https://", "apiKey": "nmem_..." } } } } } ``` The key is passed to the `nmem` subprocess via environment variable only — it never appears in logs or process arguments. **Option B — Environment variables** Set these in your shell before starting OpenClaw: ```bash export NMEM_API_URL="https://" export NMEM_API_KEY="nmem_..." ``` Both options are equivalent. Use Option A if OpenClaw runs as a service or you want the config self-contained. Use Option B to keep credentials out of config files. MCP / Agent Nodes [#mcp--agent-nodes] MCP clients connect via HTTP — pass your API key in the `Authorization` header. **Cursor** (`~/.cursor/mcp.json` or workspace `.cursor/mcp.json`): ```json { "mcpServers": { "nowledge-mem": { "url": "https:///mcp", "type": "streamableHttp", "headers": { "APP": "Cursor", "Authorization": "Bearer nmem_..." } } } } ``` **Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json`): ```json { "mcpServers": { "nowledge-mem": { "url": "https:///mcp", "type": "streamableHttp", "headers": { "APP": "Claude", "Authorization": "Bearer nmem_..." } } } } ``` **Codex CLI** (`~/.codex/config.toml`): ```toml [mcp_servers.nowledge-mem] url = "https:///mcp" [mcp_servers.nowledge-mem.http_headers] APP = "Codex" Authorization = "Bearer nmem_..." ``` **Claude Code / CI / other shell-based agents** — environment variables work too: ```bash export NMEM_API_URL="https://" export NMEM_API_KEY="nmem_..." 
``` Quick Health Check [#quick-health-check] ```bash curl -H "Authorization: Bearer $NMEM_API_KEY" "$NMEM_API_URL/health" ``` Expected: health JSON response. If wrong key: ```bash curl -H "Authorization: Bearer wrong_key" "$NMEM_API_URL/health" ``` Expected: `401`. If your proxy strips auth headers: ```bash curl "$NMEM_API_URL/health?nmem_api_key=$NMEM_API_KEY" ``` Security and Operations [#security-and-operations] * API key is required for every remote request. * Rotate key anytime in Settings (old key becomes invalid immediately). * After your first successful **Start**, tunnel reconnects automatically on app restart until you click **Stop**. * Browse-Now / Browser Bridge automation endpoints are local-only and are not exposed through Access Anywhere. * Stop tunnel when remote access is not needed. Troubleshooting [#troubleshooting] * **Start timed out**: your network/proxy may block Cloudflare traffic. Retry, or switch to Cloudflare account mode. * **`401 Missing API key`**: proxy likely removed auth headers. Update `nmem`, or use query fallback for manual checks. * **`429 Too many invalid auth attempts`**: wrong key was retried repeatedly. Re-copy key or click **Rotate**. # Search & Relevance (/docs/search-relevance) import { Callout } from 'fumadocs-ui/components/callout'; How Nowledge Mem finds matching memories, ranks them by relevance, and learns from your usage patterns. The Scoring Pipeline [#the-scoring-pipeline] Search combines multiple signals to rank results beyond keyword matching. Nowledge Mem Scoring Pipeline Semantic Scoring [#semantic-scoring] This track finds memories that match what you're looking for: * **Meaning-based search**: Finds memories by semantic similarity, not just exact words. Search for "design patterns" and find memories about "architectural approaches." * **Keyword search**: Catches exact phrases and technical terms using BM25 ranking. * **Label matching**: Surfaces memories with matching tags. * **Graph traversal**: Discovers connected memories through entities and topic communities. Decay & Temporal Scoring [#decay--temporal-scoring] This track adjusts results based on freshness and your usage: * **Recency**: Recently accessed memories score higher. We use exponential decay with about a 30-day half-life. * **Frequency**: Memories you access repeatedly become more durable (logarithmic scaling with diminishing returns). * **Importance floor**: High-importance memories maintain minimum accessibility even when unused. * **Temporal matching**: Boosts memories whose event time matches your query (deep mode only). These tracks combine into a final score that determines result ranking. Memory Decay [#memory-decay] Memories fade over time unless reinforced by use. How It Works [#how-it-works] **Recency**: A memory accessed yesterday scores much higher than one from three months ago. The 30-day half-life means scores roughly halve each month without access. **Frequency**: Your 10th access to a memory matters more than your 100th. This mirrors how human memory works: early repetitions build durability, later ones have diminishing returns. **Importance Floor**: Memories marked as high importance never fully decay. Even untouched, they maintain minimum accessibility. This protects foundational knowledge from fading away. 
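The recency curve is easy to picture. A sketch of the shape only, since the documented half-life is approximate and the real score also folds in frequency and the importance floor:

```bash
# Exponential decay with a 30-day half-life: the recency score halves every 30 days.
awk 'BEGIN { for (d = 0; d <= 120; d += 30) printf "day %3d: %.2f\n", d, 0.5^(d/30) }'
# prints roughly: day 0: 1.00, day 30: 0.50, day 60: 0.25, day 90: 0.12, day 120: 0.06
```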
What This Means [#what-this-means] * Active knowledge stays fresh * Old memories don't disappear; they just rank lower when equally relevant * Important knowledge persists regardless of access patterns * The system learns from your behavior automatically Temporal Understanding [#temporal-understanding] Nowledge Mem understands two kinds of time. Event Time vs Record Time [#event-time-vs-record-time] **Event time** is when something actually happened: * "The 2020 product launch" * "Last quarter's decisions" * "Before we migrated" **Record time** is when you saved the memory. You might record a memory today about an event from 2020. This matters for queries like "recent memories about 2020 events": things you saved recently (record time) about events from 2020 (event time). Temporal Intent Detection [#temporal-intent-detection] Temporal intent detection requires deep mode search. In fast mode, temporal references are matched by keywords only. In deep mode, the system interprets temporal references: | Query | Understanding | | ---------------------------- | --------------------------- | | "Decisions from 2023" | Event time: 2023 | | "Recent memories" | Record time: recent | | "Recent memories about 2020" | Event: 2020, Record: recent | | "Before the migration" | Event: before that event | Fuzzy references like "last quarter," "around 2020," or "early this year" are translated into meaningful filters. Date Precision [#date-precision] When you save a memory about "early 2020," the system: 1. Normalizes to a searchable date (2020-01-01) 2. Tracks precision level (year, month, or day) 3. Preserves original meaning for accurate matching This lets "memories from 2020" (year precision) work differently from "memories from January 2020" (month precision). Feedback Loop [#feedback-loop] Your usage patterns continuously improve search relevance. What We Track [#what-we-track] | Signal | What It Captures | | --------------- | -------------------------------------- | | **Appearances** | How often a memory shows in results | | **Clicks** | When you open a memory to view details | | **Dwell time** | How long you spend reading | How It Improves Search [#how-it-improves-search] * High click-through rate indicates the memory is genuinely useful * Long dwell time suggests valuable content * Frequent appearances without clicks may indicate declining relevance No action required. Relevance improves with normal use. Graph-Powered Discovery [#graph-powered-discovery] The knowledge graph enables discovery through entity and topic connections. How Memories Connect [#how-memories-connect] Each memory can link to: * **Entities**: People, concepts, technologies, places mentioned * **Other memories**: Through shared entities or relationships * **Communities**: Topic clusters detected by graph analysis Search Through Connections [#search-through-connections] **Entity-mediated**: Find memories about "database optimization" even when tagged differently, through shared entities like PostgreSQL or indexing. **Community-mediated**: A search about "authentication" might surface memories from your "Security Practices" community. **Graph expansion**: Start from one memory and explore connected knowledge, as in the sketch below.
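From the terminal, that expansion is one command. Both subcommands appear in the [CLI reference](/docs/cli); the memory ID is a placeholder:

```bash
# Walk outward from a memory: connected memories, entities, and crystals.
nmem g expand <memory-id> --depth 2   # two hops out

# Trace how that memory was refined or superseded over time
nmem g evolves <memory-id>
```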
Search Modes [#search-modes] Two modes, available across all interfaces: Fast Mode [#fast-mode] * Under 100ms typical response * Direct semantic and keyword matching * Entity and community search without language model analysis * Best for quick lookups Deep Mode [#deep-mode] * Full language model analysis * **Temporal intent detection** (e.g., "recently working on; social events in last decade") * Query expansion for better recall * Context-aware strategy weighting * Better for exploratory searches Both modes work in main search, global launcher, and API. Result Transparency [#result-transparency] Every result shows why it ranked where it did. Search Query Details [#search-query-details] After each search, you can view detailed analysis of how your query was interpreted: * Which search strategies were used * Temporal intent detection results (in deep mode) * Query expansion and entity extraction Score Breakdown [#score-breakdown] Hover over any result's score to see a breakdown of how it was calculated: * **Semantic score**: How well the content matches your query * **Decay score**: Freshness based on recency and frequency * **Temporal boost**: Event time relevance (when applicable) * **Graph signals**: Entity and community connections Search Query Details This makes it clear how usage patterns influence ranking and why certain memories appear for specific queries. # Linux Server Deployment (/docs/server-deployment) import { Step, Steps } from 'fumadocs-ui/components/steps'; import { Tab, Tabs } from 'fumadocs-ui/components/tabs'; Nowledge Mem can run as a **headless server** on Linux machines without a GUI. Install the same `.deb` or `.AppImage` package, then manage everything from the command line. Background intelligence features (daily briefings, insight detection, knowledge graph enrichment) require a [Pro license](/pricing). The server itself runs on the free tier with a 20-memory limit. System Requirements [#system-requirements] | Requirement | Specification | | -------------------- | ------------------------------------------------------------------------------- | | **Operating System** | Ubuntu 22.04+, Debian 12+, RHEL 9+, or compatible | | **Architecture** | x86\_64 | | **Memory (RAM)** | 8 GiB minimum (16 GiB recommended) | | **Disk Space** | 10 GiB available | | **Dependencies** | `libgtk-3-0`, `libwebkit2gtk-4.1-0`, `zstd` (installed automatically by `.deb`) | Installation [#installation] ```bash # Install the package sudo dpkg -i nowledge-mem_*.deb # Fix any missing dependencies sudo apt-get install -f ``` The `.deb` post-install script automatically: * Extracts the bundled Python runtime * Creates the `nmem` CLI at `/usr/local/bin/nmem` * Sets up the desktop entry (ignored on headless servers) ```bash # Make executable chmod +x Nowledge_Mem_*.AppImage # Run once to extract the Python runtime ./Nowledge_Mem_*.AppImage --appimage-extract # The nmem CLI is available after first run # Location: ~/.local/bin/nmem ``` Verify the CLI is available: ```bash nmem --version ``` Quick Start [#quick-start] Start the Server [#start-the-server] ```bash nmem serve ``` This runs the server **in the foreground** (press Ctrl+C to stop). The server starts on `0.0.0.0:14242` by default. Customize with flags: ```bash nmem serve --host 127.0.0.1 --port 8080 ``` For production, use `nmem service install` instead. It sets up a **background systemd service** that starts on boot. See [Running as a systemd Service](#running-as-a-systemd-service) below. 
Activate Your License [#activate-your-license] ```bash nmem license activate nmem license status # Verify activation ``` Download the Embedding Model [#download-the-embedding-model] ```bash nmem models download nmem models status # Verify installation ``` This downloads the embedding model for hybrid search (\~500 MB). Only needed once. Configure the LLM Provider [#configure-the-llm-provider] A remote LLM is required on Linux (no on-device LLM support): ```bash nmem config provider set anthropic \ --api-key sk-ant-xxx \ --model claude-sonnet-4-20250514 nmem config provider test # Verify connection ``` Supported providers: `anthropic`, `openai`, `ollama`, `openrouter`, and OpenAI-compatible endpoints. Enable Background Intelligence [#enable-background-intelligence] ```bash nmem config settings set backgroundIntelligence true nmem config settings set autoDailyBriefing true ``` Verify Everything [#verify-everything] ```bash nmem status ``` Running as a systemd Service [#running-as-a-systemd-service] For production deployments, use `nmem service install` to set up a background systemd service that automatically starts on boot: ```bash # Install, enable, and start sudo nmem service install # Custom host/port sudo nmem service install --host 0.0.0.0 --port 8080 ``` ```bash # No root required nmem service install --user ``` Managing the Service [#managing-the-service] ```bash nmem service status # Show service status nmem service logs -f # Follow service logs nmem service stop # Stop the service nmem service start # Start the service nmem service uninstall # Stop, disable, and remove ``` Add `--user` to any `nmem service` command if you installed a user-level service. serve vs service [#serve-vs-service] | | `nmem serve` | `nmem service install` | | ------------------ | ----------------------------- | -------------------------------------- | | **Runs in** | Foreground (current terminal) | Background (systemd) | | **Stops when** | Ctrl+C or terminal closes | `nmem service stop` or system shutdown | | **Starts on boot** | No | Yes (auto-enabled) | | **Best for** | Testing, development | Production deployments | Remote Access [#remote-access] By default, the server listens on all interfaces (`0.0.0.0`). To access it from other machines: ```bash # From a remote machine with nmem-cli installed export NMEM_API_URL=http://your-server:14242 nmem status nmem m search "query" ``` Install the standalone CLI on remote machines: ```bash pip install nmem-cli # or uv pip install nmem-cli ``` The server does not include authentication. For production use, restrict access via firewall rules, or bind to `127.0.0.1` and use SSH tunneling (see the sketch at the end of this section) or a reverse proxy with authentication. Interactive TUI [#interactive-tui] For an interactive terminal experience, use the TUI: ```bash nmem tui ``` The TUI provides a full settings management interface, including license activation, LLM configuration, and knowledge processing toggles.
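If you bound the server to `127.0.0.1` as recommended above, remote machines can still reach it securely over SSH. A minimal tunnel sketch; `user@your-server` is a placeholder for your own host:

```bash
# Forward local port 14242 to the server's loopback interface
ssh -N -L 14242:127.0.0.1:14242 user@your-server

# In another terminal, point the CLI at the tunnel
export NMEM_API_URL=http://127.0.0.1:14242
nmem status
```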
Configuration Reference [#configuration-reference] Environment Variables [#environment-variables] | Variable | Default | Description | | ----------------------- | ------------------------ | --------------------------- | | `NMEM_API_URL` | `http://127.0.0.1:14242` | Server URL for CLI commands | | `NOWLEDGE_DB_PATH` | Auto-detected | Override database location | | `NOWLEDGE_BACKEND_HOST` | `0.0.0.0` | Server bind address | CLI Commands Summary [#cli-commands-summary] | Command | Description | | -------------------------------------------- | -------------------------------------- | | `nmem serve` | Start the server in the foreground | | `nmem service install` | Install and start as a systemd service | | `nmem service status` | Show systemd service status | | `nmem service logs -f` | Follow service logs | | `nmem service stop` / `start` | Stop or start the service | | `nmem service uninstall` | Remove the systemd service | | `nmem status` | Check server health | | `nmem license activate <key>` | Activate license | | `nmem models download` | Download embedding model | | `nmem config provider set <provider> --api-key <key>` | Configure LLM provider | | `nmem config provider test` | Test LLM connection | | `nmem config settings` | Show processing settings | | `nmem config settings set <key> <value>` | Update a setting | | `nmem tui` | Interactive terminal UI | Next Steps [#next-steps] * **[CLI Reference](/docs/cli)** - Complete CLI documentation * **[API Reference](/docs/api)** - REST API endpoints * **[Integrations](/docs/integrations)** - Connect with AI tools # Troubleshooting (/docs/troubleshooting) import { Button } from "@/components/ui/button" import { Loader2, Trash2, AlertTriangle, Lightbulb, MessageSquare } from "lucide-react" import { Card, CardContent } from "@/components/ui/card" import { formatSize } from "@/lib/utils" import { Github } from "@lobehub/icons" import { Tabs, Tab, TabsList, TabTrigger, TabContent } from "fumadocs-ui/components/tabs" Viewing Logs [#viewing-logs] On macOS, the system log file is located at `~/Library/Logs/Nowledge Graph/app.log`. You can view it by running this command in your terminal: ```bash open -a Console ~/Library/Logs/Nowledge\ Graph/app.log ``` On Windows, the system log file is in one of two locations, depending on the installation method: * `%LOCALAPPDATA%\Packages\NowledgeLabsLLC.NowledgeMem_1070t6ne485wp\logs\app.log` (installed from the Microsoft Store) * `%LOCALAPPDATA%\NowledgeGraph\logs\app.log` (installed from the package file downloaded from the Nowledge Mem website) You can view it by pasting the matching path into the address bar of File Explorer: ```shell %LOCALAPPDATA%\Packages\NowledgeLabsLLC.NowledgeMem_1070t6ne485wp\logs\app.log ``` or this: ```shell %LOCALAPPDATA%\NowledgeGraph\logs\app.log ``` App Takes Too Long to Start [#app-takes-too-long-to-start] **Symptom:** The app hangs or shows a timeout error during startup. **Solution:** Global proxies or VPN software can prevent the app from accessing `http://127.0.0.1:14242` directly. Configure your proxy or VPN tool to bypass localhost addresses. Add the following to your bypass/exclusion rules: ``` 127.0.0.1, localhost, ::1 ``` This allows you to keep your proxy/VPN enabled while ensuring Nowledge Mem can communicate with its local server. After updating the bypass rules, restart Nowledge Mem. AI Now Session Fails to Start [#ai-now-session-fails-to-start] **Symptom:** Clicking **New Task** or resuming a paused task fails, and AI Now cannot open a session. **What to do first:** Check the startup diagnostics card shown in AI Now. When startup fails, AI Now shows a diagnostics card with: * the failure stage (`spawn`, `initialize`, or `new_session`) * the platform and process exit code * recent `stderr` output from the startup script * a copy button for sharing diagnostics Click **Details** to expand the technical fields, then click **Copy diagnostics** for support or issue reports. **Common fixes (especially on Windows):** 1. Verify your installation is complete (embedded Python and startup scripts are present). 2. Restart Nowledge Mem after plugin or model configuration changes. 3. Temporarily disable antivirus/quarantine rules that may block bundled Python or PowerShell startup. 4. If a plugin is involved, reconnect expired OAuth plugins in **AI Now → Plugins** and retry. If it still fails, include the copied diagnostics plus `app.log` when reporting the issue. Corrupted Model Cache [#corrupted-model-cache] **Symptom:** Search, memory distillation, or knowledge extraction features stop working unexpectedly. **Solution:** Clear the model cache and re-download the models.
Navigate to **Settings → Models** and clear the model cache. After clearing the cache, re-download the required models. CLI Not Found [#cli-not-found] **Symptom:** Running `nmem` in the terminal returns "command not found". **Solutions by platform:** * **macOS**: Install the CLI from **Settings → Preferences → Developer Tools** * **Windows**: Open a **new** terminal window after app installation (the PATH update requires a fresh session) * **Linux**: The CLI is included with deb/rpm packages. If installed manually, ensure `/usr/local/bin` is in your PATH **Quick check:** Run `nmem status` to verify the CLI can connect to Nowledge Mem. Remote Access Returns 429 [#remote-access-returns-429] **Symptom:** `nmem status` or `curl` returns `429 Too many invalid auth attempts`. **Solution:** The client retried with an invalid API key too many times. * Re-copy the URL + key from **Settings → Access Mem Anywhere** * Ensure `NMEM_API_KEY` is the exact value (no extra spaces/quotes) * If unsure, click **Rotate** to issue a new key Full setup and validation steps: [Access Mem Anywhere](/docs/remote-access). Remote Access Returns 401 Missing API key [#remote-access-returns-401-missing-api-key] **Symptom:** The tunnel URL is reachable, but `nmem status` or `curl` returns `401 Missing API key`. **Cause:** Some network proxies strip auth headers. **Fix:** * Update to the latest `nmem` (it retries with a proxy-safe fallback automatically) * Re-copy the URL + key from **Settings → Access Mem Anywhere** * For manual `curl`, verify with: `curl "$NMEM_API_URL/health?nmem_api_key=$NMEM_API_KEY"` Report Issue [#report-issue]
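When filing an issue, attach recent log output along with any diagnostics you copied from the app. A small sketch for grabbing the tail of the log on macOS; the path comes from the Viewing Logs section above (use the File Explorer paths on Windows):

```bash
# Copy the last 200 log lines to the clipboard (macOS)
tail -n 200 "$HOME/Library/Logs/Nowledge Graph/app.log" | pbcopy
```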

# Try These (/docs/try-these) import { Callout } from 'fumadocs-ui/components/callout'; Your Timeline input handles everything: questions, captures, URLs, files, scheduling. Type naturally and AI figures out the rest. Here are the queries that show what the system can really do. These queries get more powerful as your knowledge grows. After a week of regular use, the results will surprise you. The Queries [#the-queries] 1. Show my Working Memory briefing [#1-show-my-working-memory-briefing] Reads your current focus surface at `~/ai-now/memory.md`. What topics are active, what needs attention, recent activity summary. Connected AI tools (Claude Code, Cursor) read this automatically. 2. Which of my ideas have evolved the most? [#2-which-of-my-ideas-have-evolved-the-most] Finds the longest EVOLVES chains, ideas that went through multiple revisions. Tells the story chronologically: "In January you decided on PostgreSQL. By March, you were considering a hybrid approach. Your latest note confirms the dual-database migration." 3. What wisdom has crystallized from my notes? [#3-what-wisdom-has-crystallized-from-my-notes] Shows synthesized "crystals", reference articles the system distilled from multiple related memories overnight. These are the insights you couldn't get from any single note. 4. Summarize my recent coding conversations [#4-summarize-my-recent-coding-conversations] If you use Claude Code, Cursor, or Codex, your sessions auto-sync. This lists and summarizes your latest coding sessions: what was discussed, what was built, what decisions were made. 5. Just decided to use PostgreSQL for the main database [#5-just-decided-to-use-postgresql-for-the-main-database] Knowledge capture. The system saves it as a memory, searches for related decisions, and mentions connections: "This relates to your earlier note about database scaling." Just type naturally, the AI classifies what you share and stores it. 6. Save https://example.com/interesting-article [#6-save-httpsexamplecominteresting-article] Paste a URL and the system fetches, parses, and indexes the content. AI reads the page and stores a substantive summary as a memory. The URL and its content become searchable. Add a note before the URL and AI captures both. 7. Tonight, run knowledge graph extraction on my recent memories [#7-tonight-run-knowledge-graph-extraction-on-my-recent-memories] Schedule a background Knowledge Agent task. The agent fires at the specified time with full tool access: it can analyze memories, detect contradictions, create EVOLVES links, or produce crystals. Natural language timing: "in 2 hours", "tomorrow morning", "next week". Min 5 minutes, max 30 days. 8. Search my documents for [topic] [#8-search-my-documents-for-topic] Full-text search across all source documents in your Library. Drop files (PDF, Word, markdown) onto the Timeline input or add them through the Library. They get parsed, chunked, and indexed for semantic search. 9. What are my main knowledge themes? [#9-what-are-my-main-knowledge-themes] **Note**: This requires a week of regular use and background processing. Community detection clusters your entities into topic areas with AI summaries. The system runs overnight analysis to group related concepts. You'll see themes you never consciously tracked: a "developer experience" cluster you didn't know existed, or a "data architecture" theme threading through months of notes. The Compound Effect [#the-compound-effect] These queries get more powerful over time: * **Week 1**: Basic search works. 
Communities are small or empty. * **Month 1**: Evolution chains appear. Crystals start forming. Themes emerge. * **Month 3**: Cross-domain connections surprise you. Daily briefings are genuinely useful. * **Month 6**: The system knows your expertise better than you can articulate it. Next Steps [#next-steps] * [Getting Started](/docs/getting-started): Set up in five minutes * [See Your Expertise](/docs/use-cases/expertise-graph): Explore the knowledge graph visually * [Background Intelligence](/docs/advanced-features): How the system learns overnight # Using Nowledge Mem (/docs/usage) import VideoPlayer from "@/components/ui/video-player" import { Step, Steps } from 'fumadocs-ui/components/steps'; The Timeline [#the-timeline] The Timeline is your home screen. Everything lives here: what you capture, what you ask, what the system discovers on its own. Nowledge Mem Timeline Type into the input at the top. AI figures out what you meant and acts. A thought becomes a memory. A question gets answered from your knowledge. A URL gets fetched and indexed. A file gets parsed. What You'll See [#what-youll-see] | Item | What it is | | ------------------ | ------------------------------------------------------------- | | **Capture** | A memory you saved, with auto-generated title and tags | | **Question** | Your question and the AI's answer, drawn from your knowledge | | **URL Capture** | A web page fetched, parsed, and stored | | **Insight** | A connection the system discovered between your memories | | **Crystal** | A synthesized summary of multiple related memories | | **Flag** | A contradiction, stale info, or claim that needs verification | | **Working Memory** | Your daily morning briefing | Your AI Tools [#your-ai-tools] Connect any AI tool to your knowledge. Claude Code, Cursor, Codex, OpenCode, Alma, DeepChat, LobeHub, or whatever you switch to next. **Without Mem:** *"Help me implement caching for the API."* Your agent asks about your stack, your infrastructure, your preferences. You explain everything from scratch. **With Mem:** *"Help me implement caching for the API."* Your agent searches your knowledge, finds your Redis decision from last month and your API rate limiting patterns, and writes code that fits your architecture. No setup questions. This happens without prompting. The tool recognizes it has access to your knowledge and uses it when relevant.
Save an insight in Claude Code today. Cursor finds it tomorrow when it encounters the same topic. No copying, no exporting. You can also query directly: *"What did I decide about database migrations last month?"* Your agent searches your knowledge to answer. See [Integrations](/docs/integrations) for setup instructions. Search [#search] In the App [#in-the-app] Open memory search with Cmd + K (macOS). Search understands meaning, not just keywords. Searching "design patterns" finds memories about "architectural approaches." Memory Search Three search modes work together: * **Semantic** finds memories by meaning * **Keyword** does exact match for specific terms * **Graph** discovers memories through entity connections and topic clusters From Anywhere [#from-anywhere] Press Cmd + Shift + K from any application to search without opening Nowledge Mem. Copy results directly where you need them. The [Raycast extension](/docs/integrations#raycast) brings the same search into your launcher. Memory Search Launcher
AI Now [#ai-now] AI Now is a personal AI agent running on your machine. It has your full knowledge base, your connected notes, and the web. Purpose-built capabilities — not just chat: * **Deep research** that searches your memories and the web in parallel, then synthesizes * **File analysis** that understands your spreadsheets in context — "what changed from last quarter" works because it knows last quarter * **Presentations** with live preview and PowerPoint export * **Plugins** for Obsidian, Notion, Apple Notes, and any MCP service When you ask about caching, it already knows your Redis decision from last month. When you analyze data, it connects the numbers to your goals and history. Every capability draws from what you know. AI Now requires a remote LLM. See [AI Now](/docs/ai-now) for the full guide. Command Line [#command-line] The `nmem` CLI gives full access from any terminal: ```bash # Search your memories nmem m search "authentication patterns" # Add a memory nmem m add "We chose JWT with 24h expiry for the auth service" # JSON output for scripting nmem --json m search "API design" | jq '.memories[0].content' ``` See the [CLI reference](/docs/cli) for the complete command set. Remote LLMs [#remote-llms] By default, everything runs locally. No internet required. As your knowledge base grows, a remote LLM gives you stronger processing. Remote LLM configuration requires a [Pro license](/docs/mem-pro). **What it unlocks:** * **Background Intelligence**: automatic connections, crystals, insights, and daily briefings * Faster knowledge graph extraction * More nuanced semantic understanding * AI Now agent capabilities **Privacy:** your data is sent only to the LLM provider you choose. Never to Nowledge Mem servers. Switch back to local-only at any time. To set it up: 1. Go to **Settings > Remote LLM** 2. Toggle **Remote** to enable 3. Select your LLM provider and enter your API key 4. Test the connection, select a model, and save Remote LLM Next Steps [#next-steps] * **[AI Now](/docs/ai-now)**: Deep research and analysis powered by your knowledge * **[Background Intelligence](/docs/advanced-features)**: Knowledge graph, insights, crystals, working memory * **[Integrations](/docs/integrations)**: Connect your AI tools # Nowledge Mem API (/docs/api) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} # Integrations (/docs/integrations) import VideoPlayer from "@/components/ui/video-player" import { McpServerView } from "@/components/docs/mcp" import { BrowserExtensionGuide } from "@/components/docs/browser-extension-guide" import { FileImportGuide } from "@/components/docs/file-import" import { InlineTOC } from 'fumadocs-ui/components/inline-toc'; import { Step, Steps } from 'fumadocs-ui/components/steps'; import { Button } from '@/components/ui/button'; import { Download } from 'lucide-react'; import { CodeXml } from 'lucide-react'; import { Files } from 'lucide-react'; import { Braces } from 'lucide-react'; import { FileText } from 'lucide-react'; Nowledge Mem connects to whatever tools you use today, and whatever you'll switch to tomorrow. Your knowledge stays in one place; the tools come and go. Quick Start (One Command) [#quick-start-one-command] For Claude Code, Cursor, Codex, OpenCode, OpenClaw, Alma, and 20+ agents: ```bash npx skills add nowledge-co/community/nowledge-mem-npx-skills ``` This installs four skills: **search-memory**, **read-working-memory**, **save-thread**, and **distill-memory**.
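The skills talk to your local Nowledge Mem, so the app (or headless server) must be running. A quick sanity check from any terminal, assuming the `nmem` CLI is installed:

```bash
# Verify Nowledge Mem is reachable before your agent tries
nmem status
```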
After setup, your agent reads context at session start, searches knowledge when relevant, and saves findings as it works. | I want to... | Use | | ----------------------------------------------------------------------- | -------------------------------------------------------------------------------- | | Use Nowledge Mem with **Claude Code, Codex, Cursor, OpenCode, or Alma** | npx skills (above) or [tool-specific setup](#claude-code) / [Alma plugin](#alma) | | Use Nowledge Mem with **OpenClaw** | [OpenClaw in 5 Minutes](/docs/integrations/openclaw) | | Search memories from **Raycast** | [Raycast extension](#raycast) | | Capture memories from **ChatGPT, Claude, Gemini**, and 13+ AI platforms | [Browser extension](#browser-extension) (auto or manual) | | Access Mem from **any machine over internet** | [Access Mem Anywhere guide](/docs/remote-access) | | Build **custom integrations** | [REST API](#api-integration) or [CLI](#command-line-interface-cli) | Model Context Protocol (MCP) [#model-context-protocol-mcp] MCP is the protocol AI agents use to interact with Nowledge Mem. The npx skills above use MCP under the hood. For tools that need manual configuration, see below. Two Integration Paths [#two-integration-paths] | Path | Apps | Setup | Autonomous Behavior | | -------------------- | ------------------------------------------------------------------------------------------------ | ---------------------------------- | ------------------------------------ | | **Skill-Compatible** | Claude Code, Codex, Cursor, OpenCode, [OpenClaw](https://openclaw.ai), [Alma](https://alma.now/) | `npx skills add` or install plugin | Built-in triggers, no prompts needed | | **MCP-Only** | Claude Desktop, Cursor, ChatWise, etc. | Configure MCP + system prompts | Requires system prompts for autonomy | **Skill-compatible apps** (Claude Code, Codex, Cursor, OpenCode, OpenClaw, Alma): The npx skills command above is the fastest. Or jump to [Claude Code](#claude-code) / [Codex CLI](#codex-cli) / [Alma](#alma) for tool-specific setup. **MCP-only apps**: Continue below to configure MCP and add system prompts for autonomous behavior. MCP Capabilities [#mcp-capabilities] * **Search memories**: `memory_search` * **Read Working Memory**: `read_working_memory` * **Add memories**: `memory_add` * **Update memories**: `memory_update` * **List memory labels**: `list_memory_labels` * **Save/Import threads**: `thread_persist` * **Prompts**: `sum` (summarize to memory), `save` (checkpoint thread) MCP Server Configuration [#mcp-server-configuration] System Prompts for Autonomous Behavior [#system-prompts-for-autonomous-behavior] For MCP-only apps to act autonomously, add these instructions to your agent's system prompt or CLAUDE.md/AGENTS.md: ```markdown ## Nowledge Mem Integration You have access to Nowledge Mem for knowledge management. 
Use these tools proactively: **At Session Start (`read_working_memory`):** - Read ~/ai-now/memory.md for today's briefing - Understand user's active focus areas, priorities, and unresolved flags - Reference this context naturally when it connects to the current task **When to Search (`memory_search`):** - Current topic connects to prior work - Problem resembles past solved issue - User asks about previous decisions ("why did we choose X?") - Complex debugging that may match past root causes **When to Save Memories (`memory_add`):** - After solving complex problems or debugging - When important decisions are made with rationale - After discovering key insights ("aha" moments) - When documenting procedures or workflows - Skip: routine fixes, work in progress, generic Q&A **Memory Categories (use as labels):** - insight: Key learnings, realizations - decision: Choices with rationale and trade-offs - fact: Important information, data points - procedure: How-to knowledge, workflows - experience: Events, conversations, outcomes **Memory Quality:** - Atomic and actionable (not vague) - Standalone context (readable without conversation) - Focus on "what was learned" not "what was discussed" **Importance Scale (0.1-1.0):** - 0.8-1.0: Critical decisions, breakthroughs - 0.5-0.7: Useful insights, standard decisions - 0.1-0.4: Background info, minor details **When to Save Threads (`thread_persist`):** - Only when user explicitly requests ("save this session") - Never auto-save without asking ``` This enables autonomous memory operations in Claude Desktop, Cursor, ChatWise, and other MCP-only apps. Browser Extension [#browser-extension] Nowledge Mem Exchange captures memories from AI conversations on ChatGPT, Claude, Gemini, and 13+ platforms. It runs in a native Chrome SidePanel alongside your conversations.
Smart Distill [#smart-distill] Auto-capture evaluates each conversation turn and saves what matters. Configure your preferred LLM provider and let the extension work autonomously. Exchange: Proactive capture settings Exchange: Smart Distill trigger Exchange: Capture result Three Ways to Capture [#three-ways-to-capture] | Mode | How it works | When to use | | ------------------ | -------------------------------------------------------------------- | -------------------------------------------------------------------- | | **Auto-Capture** | Monitors your conversations and autonomously saves valuable insights | Set it and forget it. The extension decides what's worth remembering | | **Manual Distill** | You trigger capture on a conversation you want to save | When you know a conversation contains something important | | **Thread Backup** | Imports the full conversation as a thread, with incremental dedup | Archive entire conversations for later distillation in the app | Auto-Capture [#auto-capture] When enabled, the extension monitors conversations and applies strict criteria to decide what's worth saving: * **Refined conclusions**: decisions, plans, finalized approaches * **Important discoveries**: breakthroughs, key findings * **Knowledge explorations**: deep dives, research synthesis Routine Q\&A and generic exchanges are skipped. The extension checks for duplicates before saving and can update existing memories instead of creating new ones. Auto-capture requires a configured LLM provider. Open the SidePanel, go to **Settings**, and add your API key. Supported providers: OpenAI, Anthropic, Google, xAI, OpenRouter, Ollama, and OpenAI-compatible endpoints. Thread Backup [#thread-backup] Imports the full conversation as a thread. Subsequent backups only capture new messages (incremental sync). Once imported, trigger Memory Distillation from the app to extract individual memories. For local coding assistants, Nowledge Mem also supports **AI Conversation Discovery (auto-sync)** with incremental updates for **Claude Code, Cursor, Codex, and OpenCode**. Supported Platforms [#supported-platforms] The extension works with all major AI chat services: | Platform | Sites | | -------------- | -------------------------- | | **ChatGPT** | openai.com, chatgpt.com | | **Claude** | claude.ai | | **Gemini** | gemini.google.com | | **Perplexity** | perplexity.ai | | **DeepSeek** | chat.deepseek.com | | **Kimi** | kimi.moonshot.cn | | **Qwen** | qwen.ai, tongyi.aliyun.com | | **POE** | poe.com | | **Manus** | manus.im | | **Grok** | grok.com, grok.x.ai, x.ai | | **Open WebUI** | localhost, private IPs | | **ChatGLM** | chatglm.cn | | **MiniMax** | agent.minimaxi.com | Pro users with a configured LLM can auto-generate handlers for any AI chat site. Navigate to the site, open the SidePanel, and click **Generate handler**. The extension analyzes the page structure and creates a custom handler automatically. Connect Extension to Access Mem Anywhere [#connect-extension-to-access-mem-anywhere] If your Mem API is exposed through **Settings → Access Mem Anywhere** in the desktop app: 1. Open any supported AI chat page, then open the extension SidePanel 2. Click **Settings** 3. In **Access Mem Anywhere**, paste: * `export NMEM_API_URL="https://your-tunnel-url"` * `export NMEM_API_KEY="nmem_..."` 4. Click **Fill URL + key** 5. Click **Save**, then **Test connection** Full guide (Quick link and Cloudflare account modes): [Access Mem Anywhere](/docs/remote-access).
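After pasting the URL and key into the extension, you can sanity-check the same pair from a terminal. This mirrors the proxy-safe fallback `curl` shown in Troubleshooting; the URL below is a placeholder for your own tunnel address:

```bash
export NMEM_API_URL="https://your-tunnel-url"
export NMEM_API_KEY="nmem_..."

# Header-less fallback documented for proxy-stripped setups
curl "$NMEM_API_URL/health?nmem_api_key=$NMEM_API_KEY"
```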
Download [#download] The extension also supports downloading any conversation thread as a `.md` file for archiving or sharing. **MD Format Reference**: example conversation file in MD format ([format reference](https://github.com/nowledge-co/nowledge-mem/blob/main/refs/nowledge_mem_exchange/example_conversation_file.md)) Thread File Import [#thread-file-import] Import conversations from your favorite AI tools by uploading exported conversation files directly into Nowledge Mem. AI Conversation Discovery (Auto-Sync) [#ai-conversation-discovery-auto-sync] Find and import local coding-assistant conversations directly from the app: | Client | Sync Mode | Where | | --------------- | --------------------------------- | ---------------------------------------- | | **Claude Code** | Auto-discovery + incremental sync | Threads → Import → Find AI Conversations | | **Cursor** | Auto-discovery + incremental sync | Threads → Import → Find AI Conversations | | **Codex** | Auto-discovery + incremental sync | Threads → Import → Find AI Conversations | | **OpenCode** | Auto-discovery + incremental sync | Threads → Import → Find AI Conversations | Bulk Import (Multiple Threads at Once) [#bulk-import-multiple-threads-at-once] For users with large conversation histories, Nowledge Mem supports bulk importing all your conversations from a single export file: | Source | File Format | How to Export | | ------------ | ---------------------------- | -------------------------------------- | | **ChatGPT** | `chat.html` | Settings → Data controls → Export data | | **ChatWise** | `.zip` (contains JSON files) | Export all chats from ChatWise app | Single Thread Import [#single-thread-import] For importing individual conversations: | Source | File Format | Notes | | ------------ | ----------- | --------------------------------------- | | **Cursor** | `.md` | Export conversation from Cursor | | **ChatWise** | `.html` | Single chat HTML export | | **Generic** | `.md` | Any markdown with user/assistant format | For developers building custom import tools: * **Thread API**: create threads programmatically from your tool ([API reference](https://mem.nowledge.co/docs/api/threads/post)) * **Markdown format**: convert conversations to an importable `.md` file ([format reference](https://github.com/nowledge-co/nowledge-mem/blob/main/refs/nowledge_mem_exchange/example_conversation_file.md)) Tight Integrations [#tight-integrations] **DeepChat** and **LobeHub** include Nowledge Mem as a built-in integration. Claude Desktop [#claude-desktop] One-click extension for Claude Desktop. Download Extension Install Extension Ensure Python 3.13 is installed on your system. Open **Terminal.app** and run the following commands: ```bash which brew || /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" python3.13 --version || /opt/homebrew/bin/brew install python@3.13 ``` 1. Double-click the downloaded `claude-dxt.mcpb` file from your browser's download area 2. Click the **Install** button in the installation dialog 3. Restart the Claude Desktop app Install Extension You can now ask Claude to save insights to Nowledge Mem, update existing memories, or search your knowledge base anytime during conversations. Use Mem in Claude Desktop > Note: if you fail to enable Mem in Claude Desktop, check the logs via `tail -n 20 -F ~/Library/Logs/Claude/mcp*.log` and share them with us. Claude Code [#claude-code] Claude Code supports skills - install the plugin for built-in autonomous behavior.
No system prompts or MCP configuration needed. The CLI-based plugin includes skills that: * Search your knowledge base when relevant context exists * Suggest distillation at breakthrough moments * Save sessions on explicit request Install the Claude Code plugin Install the Nowledge Mem plugin for autonomous search, save, and checkpoint behavior. The plugin uses the `nmem` CLI. See: [Claude Code plugins](https://docs.claude.com/en/docs/claude-code/plugins). ```bash # Add the Nowledge community marketplace claude plugin marketplace add nowledge-co/community # Install the Nowledge Mem plugin claude plugin install nowledge-mem@nowledge-community ``` **Prerequisites**: The plugin requires the `nmem` CLI. Install it with: ```bash # Option 1 (Recommended): Use uvx (no installation needed) curl -LsSf https://astral.sh/uv/install.sh | sh uvx --from nmem-cli nmem --version # Option 2: Install with pip pip install nmem-cli ``` **Note**: On Windows/Linux with the Nowledge Mem Desktop app, `nmem` is bundled. On macOS or remote servers, use `uvx` or install manually. **Update Plugin**: To get the latest version: ```bash claude plugin marketplace update claude plugin update nowledge-mem@nowledge-community # Restart Claude Code to apply changes ``` Usage Three ways to use Nowledge Mem inside a Claude Code chat: **Slash Commands (Quick Access)** Type these commands directly: * `/save` - Save the current session to Nowledge Mem * `/sum` - Distill insights from this conversation * `/search <query>` - Search your knowledge base **Natural Language** * Say "Save this session" or "Checkpoint this conversation" * Claude will automatically run `nmem t save --from claude-code` * Say "Distill this conversation" or "Save the key insights" * Claude will analyze and create structured memories using `nmem m add` **Autonomous (via Skills)** The plugin includes four skills that work automatically: * **Read Working Memory**: loads your daily briefing at session start and after context compaction * **Search Memory**: searches when you reference past work * **Distill Memory**: suggests distillation at breakthrough moments * **Save Thread**: saves sessions on explicit request **Lifecycle Hooks** The plugin includes [Claude Code hooks](https://code.claude.com/docs/en/hooks) for automatic lifecycle management: | Event | Trigger | Action | | ------------------------ | ------------------------ | ------------------------------------------------------------------- | | `SessionStart` (startup) | New session begins | Injects the Working Memory briefing | | `SessionStart` (compact) | After context compaction | Re-injects Working Memory and prompts Claude to checkpoint progress | These hooks run automatically. Working Memory context is injected into Claude's context at startup and after compaction, so Claude always knows your current priorities. After compaction, Claude is prompted to save important findings via `nmem m add` before continuing. **Autonomous Knowledge Capture** For proactive memory management, see the complete example: **[AGENTS.md](https://github.com/nowledge-co/community/blob/main/examples/AGENTS.md)**: a memory-keeper agent using the [agents.md standard](https://agents.md/) that works with any AI coding agent. Codex CLI [#codex-cli] Codex supports custom prompts - install them for built-in slash commands. No MCP configuration needed. Codex integration works via the `nmem` CLI and custom prompts.
**Install nmem CLI** ```bash # Option 1 (Recommended): Use uvx (no installation needed) curl -LsSf https://astral.sh/uv/install.sh | sh uvx --from nmem-cli nmem --version # Option 2: Install with pip pip install nmem-cli ``` **Note**: On Windows/Linux with the Nowledge Mem Desktop app, `nmem` is bundled. On macOS or remote servers, use `uvx` or install manually. **Install Custom Prompts** Install custom prompts for slash commands: > Fresh install: ```bash curl -fsSL https://raw.githubusercontent.com/nowledge-co/community/main/nowledge-mem-codex-prompts/install.sh | bash ``` > Update install: ```bash curl -fsSL https://raw.githubusercontent.com/nowledge-co/community/main/nowledge-mem-codex-prompts/install.sh -o /tmp/install.sh && bash /tmp/install.sh --force && rm /tmp/install.sh ``` Usage inside a Codex chat: **Slash Commands** Type these commands directly: * `/prompts:read_working_memory` - Load your daily Working Memory briefing for context * `/prompts:save_session` - Save the current session using `nmem t save --from codex` * `/prompts:distill` - Distill insights using `nmem m add` Or type `/` and search for "memory", "save", or "distill" to find them. **Troubleshooting** * **"Command not found: nmem"** → Use `uvx --from nmem-cli nmem --version` or install with `pip install nmem-cli` * **"Command not found: uvx"** → Install uv with `curl -LsSf https://astral.sh/uv/install.sh | sh` * **Sessions not listing** → Ensure you're in the correct project directory DeepChat [#deepchat] DeepChat has built-in Nowledge Mem support. 1. **Enable MCP in DeepChat**: toggle on the switch under Settings > MCP Settings 2. **Enable Nowledge Mem**: toggle on the nowledge-mem switch under Custom Servers DeepChat Toggle Highlight LobeHub [#lobehub] LobeHub (formerly LobeChat) has built-in Nowledge Mem support. **One-Click Installation**: install Nowledge Mem directly in LobeHub using the one-click installation feature. Click the Install button to install the Nowledge Mem LobeHub plugin. LobeHub Installation Demo OpenClaw [#openclaw] [OpenClaw](https://openclaw.ai) plugin for persistent agent memory. Source: [community/nowledge-mem-openclaw-plugin](https://github.com/nowledge-co/community/tree/main/nowledge-mem-openclaw-plugin) Use the dedicated setup guide: **[OpenClaw in 5 Minutes](/docs/integrations/openclaw)** It includes: * the correct slot-based config (`plugins.slots.memory = "openclaw-nowledge-mem"`) * install and verification commands * optional lifecycle capture setup * a local-first regression validation workflow Alma [#alma] [Alma](https://alma.now/) plugin for persistent memory workflows. Source: [community/nowledge-mem-alma-plugin](https://github.com/nowledge-co/community/tree/main/nowledge-mem-alma-plugin) Clone the plugin, install dependencies, and copy it into Alma's local plugin directory: ```bash git clone https://github.com/nowledge-co/community.git cd community/nowledge-mem-alma-plugin npm install mkdir -p ~/.config/alma/plugins/nowledge-mem cp -R . ~/.config/alma/plugins/nowledge-mem
``` Restart Alma **What the plugin provides:** * **Tool suite**: memory query/search/store/show/update/delete + thread search/show/create/delete + Working Memory * **Command palette actions**: status, search, save memory, read Working Memory, save current thread * **Auto-recall hook**: injects Working Memory + relevant memories on the first outgoing message in each thread * **Optional auto-capture hook**: saves the current thread on app quit * **Local-first runtime**: uses the `nmem` CLI (fallback: `uvx --from nmem-cli nmem`) Raycast [#raycast] [Raycast](https://raycast.com) extension with four commands: Source: [community/nowledge-mem-raycast](https://github.com/nowledge-co/community/tree/main/nowledge-mem-raycast) | Command | What it does | | ----------------------- | ----------------------------------------------------------------------------- | | **Search Memories** | Semantic search with relevance scores, copy content or title from any result | | **Add Memory** | Save a memory with title, content, and importance | | **Working Memory** | View your daily briefing | | **Edit Working Memory** | Edit `~/ai-now/memory.md` inline, changes respected by all connected AI tools | **Raycast Store** (coming soon): Once [our Store submission](https://github.com/raycast/extensions/pull/25451) is merged, search "Nowledge Mem" in the Raycast Store to install. **Install from source** (available now): ```bash git clone https://github.com/nowledge-co/community.git cd community/nowledge-mem-raycast npm install && npm run dev ``` Requires Nowledge Mem running locally. The extension calls the HTTP API at `localhost:14242` for search and memory creation, and reads `~/ai-now/memory.md` for Working Memory. LLM-Friendly Documentation [#llm-friendly-documentation] Every page on this docs site is available as clean Markdown for AI agents and LLMs. Request any docs URL with the `Accept: text/markdown` header and you get Markdown instead of HTML: ```bash # Fetch any docs page as Markdown curl -H "Accept: text/markdown" https://mem.nowledge.co/docs curl -H "Accept: text/markdown" https://mem.nowledge.co/docs/getting-started curl -H "Accept: text/markdown" https://mem.nowledge.co/docs/integrations ``` Dedicated endpoints are also available: | Endpoint | What it returns | | --------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- | | [`/llms-full.txt`](https://mem.nowledge.co/llms-full.txt) | All documentation pages concatenated into one file | | `/llms.mdx/docs/<page>` | A single page as Markdown (e.g. [`/llms.mdx/docs/getting-started`](https://mem.nowledge.co/llms.mdx/docs/getting-started)) | No authentication required. API Integration [#api-integration] RESTful API for programmatic access. **[API Reference](/docs/api)**: Nowledge Mem RESTful API documentation. **OpenAPI Spec**: openapi.json Command Line Interface (CLI) [#command-line-interface-cli] The `nmem` CLI provides terminal access to your knowledge base.
Installation [#installation] | Platform | Installation | | ----------- | ------------------------------------------------------ | | **macOS** | Settings → Preferences → Developer Tools → Install CLI | | **Windows** | Automatically installed with the app | | **Linux** | Included with deb/rpm packages | Quick Start [#quick-start] ```bash # Check connection nmem status # Search memories nmem m search "project notes" # List recent memories nmem m # Create a memory nmem m add "Important insight" --title "Project Learnings" # Search threads nmem t search "architecture" # Save Claude Code/Codex sessions via CLI nmem t save --from claude-code nmem t save --from codex -s "Summary of what was accomplished" # Create a thread from content nmem t create -t "Session Notes" -c "Key discussion points..." # Create a thread from file nmem t create -t "Meeting Notes" -f notes.md ``` AI Agent Integration [#ai-agent-integration] The CLI is designed for AI agent workflows with JSON output: ```bash # Get JSON output for parsing nmem --json m search "API design" # Chain commands ID=$(nmem --json m add "Note" | jq -r '.id') nmem --json m update "$ID" --importance 0.9 # Multi-message thread creation nmem t create -t "Session" -m '[{"role":"user","content":"Q"},{"role":"assistant","content":"A"}]' ``` Command Reference [#command-reference] | Command | Alias | Description | | --------------- | -------- | ----------------------- | | `nmem status` | | Check server connection | | `nmem stats` | | Database statistics | | `nmem memories` | `nmem m` | Memory operations | | `nmem threads` | `nmem t` | Thread operations | For complete CLI documentation, run `nmem --help` or see the CLI Reference on GitHub. Share your integration on GitHub or Discord. Next Steps [#next-steps] * **[Troubleshooting](/docs/troubleshooting)**: Common issues and solutions * **[Background Intelligence](/docs/advanced-features)**: Knowledge graph, insights, and daily briefings # OpenClaw × Nowledge Mem (/docs/integrations/openclaw) import { Step, Steps } from 'fumadocs-ui/components/steps'; Once configured, your OpenClaw agent remembers what you said in the last session, the decision you made last week, and the knowledge you wrote into a document three months ago. Before You Start [#before-you-start] You need: * Nowledge Mem running locally ([installation](/docs/installation)) * OpenClaw installed ([OpenClaw getting started](https://docs.openclaw.ai/start/openclaw)) * `nmem` CLI on your PATH ```bash nmem status # should show Nowledge Mem is running openclaw --version ``` Setup [#setup] Install the plugin ```bash openclaw plugins install @nowledge/openclaw-nowledge-mem ``` Enable the plugin in OpenClaw config Open `~/.openclaw/openclaw.json` and add: ```json { "plugins": { "slots": { "memory": "openclaw-nowledge-mem" }, "entries": { "openclaw-nowledge-mem": { "enabled": true, "config": { "autoRecall": true, "autoCapture": false, "maxRecallResults": 5 } } } } } ``` Restart OpenClaw and verify ```bash openclaw nowledge-mem status ``` If Nowledge Mem is reachable, you're done. Verify It Works (1 Minute) [#verify-it-works-1-minute] In OpenClaw chat: 1. `/remember We chose PostgreSQL for task events` 2. `/recall PostgreSQL` — should find it immediately 3. `/new` — start a fresh session 4. Ask: `What database did we choose for task events?` — it remembers across sessions 5. Ask: `What was I working on this week?` — weekly activity view 6. Ask: `What was I doing on February 17?` — down to the exact day 7. 
`/forget PostgreSQL task events` — clean deletion If all seven steps work, the memory system is fully running. What You Can Do [#what-you-can-do] **Remember anything, forever** Tell the AI `/remember We decided against microservices — the team is too small`. Next week, in a different session, ask "what was that decision about microservices?" It finds it. **Browse your work by date** Ask "what was I doing last Tuesday?" and the AI lists everything you saved, documents you added, and insights generated that day. You can ask for a specific date — not just "the past N days." **Trace a decision's history** Ask the AI "how did this idea develop?" and it shows you: the original source documents that informed it, which related memories were synthesized into a higher-level insight, and how your understanding changed over time. **Start every session already in context** Every morning, the Knowledge Agent produces a daily briefing: what you're focused on, open questions, recent changes. Your agent reads it at the start of every session. You never repeat yourself. **Save knowledge with structure, not just text** When you ask the AI to remember something, it doesn't just store text — it records the type (decision, learning, preference, plan…), when it happened, and links it to related knowledge. Searching by type, by date, by topic all work because the structure is there. **Slash commands**: `/remember`, `/recall`, `/forget` How the Hooks Work [#how-the-hooks-work] Both `autoRecall` and `autoCapture` run in the background via plugin lifecycle hooks — they are not AI decisions. The agent never calls a hidden "save" function. The plugin code fires at specific moments, regardless of what the agent is doing. autoRecall — What happens at session start [#autorecall--what-happens-at-session-start] Before the agent sees your message, the plugin silently: 1. Reads your **Working Memory** — the daily briefing the Knowledge Agent generates each morning (focus areas, open questions, recent changes) 2. Searches your knowledge graph for **memories relevant to your current prompt** 3. Prepends both as invisible context to the system prompt, along with guidance on which Nowledge Mem tools are available The agent starts each session already aware of your context. You don't ask for it. It just works. autoCapture — What happens at session end [#autocapture--what-happens-at-session-end] By default, the agent only saves when you ask it to (`autoCapture: false`). Turn it on to capture automatically: ```json "autoCapture": true ``` At the end of each session (and at context compaction and reset), **two independent things happen**: **1. The full conversation is saved as a thread.** Every message — yours and the agent's — is appended to a persistent thread in Nowledge Mem, keyed to this session. This happens unconditionally on every successful session end, no matter what was said. You can browse threads chronologically with `nowledge_mem_timeline`, or search them from any tool. **2. A memory note may be extracted.** If your last message contains a decision, preference, or stated fact — for example "I prefer TypeScript" or "we decided against microservices" — a separate structured memory is also created. Questions, short messages, and slash commands are skipped. The memory note is independent of the thread: both can happen, one, or neither. **Context compaction** is when OpenClaw compresses a long conversation to fit the model's context window. 
The plugin captures the transcript at that moment too — messages that get compressed away still end up in your knowledge base. Messages are deduplicated — if the plugin fires at both session end and reset, you won't get duplicate entries. Use Across Multiple Machines [#use-across-multiple-machines] If OpenClaw runs on a different machine than Nowledge Mem, add your server address to the plugin config: ```json "apiUrl": "https://your-nowledge-mem-url", "apiKey": "nmem_..." ``` Or via environment variables: ```bash export NMEM_API_URL="https://your-nowledge-mem-url" export NMEM_API_KEY="nmem_..." ``` The API key is passed only through the process environment — it never appears in logs or command history. See [Access Mem Anywhere](/docs/remote-access). Troubleshooting [#troubleshooting] **Plugin is installed but OpenClaw isn't using it** Check that `plugins.slots.memory` is exactly `openclaw-nowledge-mem`, and that you restarted OpenClaw after editing the config. **"Duplicate plugin id detected" warning** This happens if you previously installed the plugin locally (e.g. with `--link`) and then installed from npm. OpenClaw is loading it from both places. Fix it by removing the local path from your config: Open `~/.openclaw/openclaw.json` and delete the `plugins.load.paths` entry that points to the local plugin directory: ```json "load": { "paths": [] } ``` Then restart OpenClaw. The warning will be gone and only the npm-installed version will load. **Status shows not responding** ```bash nmem status curl -sS http://127.0.0.1:14242/health ``` **Search returns too few results** Raise `maxRecallResults` to `8` or `12`. Why Nowledge Mem? [#why-nowledge-mem] Other memory tools store what you said as text and retrieve it by semantic similarity. Nowledge Mem is different. **Knowledge has structure.** Every memory knows what type it is — decision, learning, plan, preference — when it happened, which source documents it came from, and how it relates to other memories. That's what makes search precise and reasoning reliable. **Knowledge evolves.** The understanding you wrote today connects to the updated version you saved three months later. You can see how your thinking changed, without losing the intermediate steps. **Knowledge has provenance.** Every piece of knowledge extracted from a PDF, document, or web page links back to its source. When the AI says "based on your March design doc," you can verify it. **Knowledge travels across tools.** What you learned in Cursor, saved in Claude, refined in ChatGPT — all available in OpenClaw. Your knowledge belongs to you, not to any one tool. **Local first, no cloud required.** Your knowledge lives on your machine. Remote access is available when you need it, not imposed by default. How search ranking works: [Search & Relevance](/docs/search-relevance). For Advanced Users [#for-advanced-users] OpenClaw's `MEMORY.md` workspace file still works for workspace context. Memory tool calls are handled by Nowledge Mem, but both can coexist. The plugin communicates with Nowledge Mem through the `nmem` CLI. Local and remote modes behave identically — configure the address once and every tool call routes correctly. 
References [#references] * Plugin source: [nowledge-mem-openclaw-plugin](https://github.com/nowledge-co/community/tree/main/nowledge-mem-openclaw-plugin) * OpenClaw docs: [Plugin system](https://docs.openclaw.ai/tools/plugin) * Changelog: [CHANGELOG.md](https://github.com/nowledge-co/community/blob/main/nowledge-mem-openclaw-plugin/CHANGELOG.md) # Search Through Time (/docs/use-cases/bi-temporal) import VideoPlayer from "@/components/ui/video-player" import { Step, Steps } from 'fumadocs-ui/components/steps'; The Problem [#the-problem] The board asks: *"Why did you choose React Native over Flutter in Q1?"* You remember the decision. But you remember it through the lens of everything that happened after: the pivot, the performance issues, the rewrite. You need to answer: **What did you know THEN?** > "I can search my notes for 'React Native'. But I can't search for 'what I believed in March about React Native'." The Solution [#the-solution] Nowledge Mem uses **bi-temporal search**: two dimensions of time that let you find exactly what you're looking for. Bi-temporal Search **Event Time**: When did the thing actually happen? **Record Time**: When did you capture it? Search either. Search both. Travel through your own history. Search Query Details Blog: [How We Taught Nowledge Mem to Forget](https://nowledge-labs.ai/blog/memory-decay-temporal). Documentation about [Search & Relevance](/docs/search-relevance). How It Works [#how-it-works] Natural Language Queries [#natural-language-queries] Just search naturally. Nowledge Mem understands temporal intent: > "What did I decide about React Native in Q1 2024?" The system: 1. Detects temporal intent: "Q1 2024" 2. Searches memories where the **event** occurred in that period 3. Returns results with original context No special syntax needed. Explicit Temporal Filters [#explicit-temporal-filters] For precise control, use the advanced search: | Filter | Meaning | Example | | -------------------- | --------------------- | ---------- | | **Event Date From** | Event happened after | 2024-01-01 | | **Event Date To** | Event happened before | 2024-03-31 | | **Record Date From** | Written down after | 2024-01-01 | | **Record Date To** | Written down before | 2024-12-31 | **Power Query Example:** > Event Time: March 2024 > Record Time: Any Returns: *"All memories about events from March 2024, regardless of when you recorded them."* Flexible Date Precision [#flexible-date-precision] Nowledge Mem handles flexible dates: * **Year**: "2024" -> Matches anything in 2024 * **Month**: "2024-03" -> Matches March 2024 * **Day**: "2024-03-15" -> Matches that specific day The system preserves your original precision and displays accordingly. Knowledge Evolution [#knowledge-evolution] Bi-temporal search gets even more powerful with Knowledge Evolution. Background Intelligence automatically detects when your thinking on a topic changes: **Tuesday**: You save "Using PostgreSQL for the new service." **Thursday**: You mention CockroachDB as a migration target. **Friday**: Background Intelligence links them with an EVOLVES relationship and flags the tension. Now when you search "database decisions," you don't just get isolated memories. You get the **evolution chain**: the original decision, the update, and the relationship between them. You can see exactly how your thinking shifted and when. 
Evolution types: * **Replaces**: Newer information makes older obsolete * **Enriches**: Newer adds detail to older * **Confirms**: Same conclusion from a different source * **Challenges**: Contradictory information flagged for review Real Examples [#real-examples] Board Retrospective [#board-retrospective] > **Query**: "architecture decisions in Q1 2024" > > **Result**: Original decision memos with Q1 context, plus evolution chains showing how decisions changed after Compliance Audit [#compliance-audit] > **Query**: "security policies before the incident" > > **Result**: What policies existed before the breach, with record timestamps proving when they were documented Project Post-Mortem [#project-post-mortem] > **Query**: "project-x assumptions from kickoff" > > **Result**: Original assumptions that turned out wrong, linked to the later insights that proved them wrong Knowledge Graph + Time [#knowledge-graph--time] Your graph view has a **timeline slider** that filters nodes and edges by date range. Set the range to "March 2024" and see: * Only entities that existed then * Only connections that were known then * The state of your knowledge at that moment Drag the slider forward and watch your understanding evolve. Play the animation to see knowledge accumulate over time. How Memory Decay Works [#how-memory-decay-works] Not all memories age equally. Like your brain, Nowledge Mem: * **Prioritizes recent memories** by default (30-day half-life) * **Boosts frequently accessed** memories (logarithmic scaling) * **Respects importance** scores you set (importance floor prevents full decay) * **Learns from your behavior** (clicks, dwell time) This means casual searches surface fresh, relevant results, but temporal searches bypass decay to find exactly what you asked for. Temporal intent detection requires **Deep Mode** search. In Fast Mode, temporal references are matched by keywords only. Enable Deep Mode for queries like "recently working on" or "decisions from last quarter." See [Search & Relevance](/docs/search-relevance) for the full technical breakdown of how scoring, decay, and temporal matching work. The Two Times [#the-two-times] Understanding the difference is key: | Question | Which Time? | | ------------------------------------ | ----------- | | "What did I decide in March?" | Event Time | | "What did I write last week?" | Record Time | | "Show recent notes about old events" | Both | | "What did I know before the pivot?" | Event Time | Most searches use **event time** because you're asking about when things happened. **Record time** is useful for: * Finding recent captures * Reviewing what you've been documenting * Auditing when knowledge was recorded Why This Matters [#why-this-matters] Traditional search finds content. Temporal search finds **context**. Knowledge Evolution finds **the story**. > "We didn't make a bad decision. We made the best decision with what we knew. Here's the proof. And here's exactly when and why our thinking changed." Your memories are time-stamped, version-controlled, and historically accurate. 
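Temporal queries work anywhere search does, including the terminal. A small sketch using the documented `nmem m search` command (the query strings are illustrative):

```bash
# Event-time query: what did I decide back then?
nmem m search "architecture decisions in Q1 2024"

# Record-time query: what have I captured recently about an older event?
nmem m search "recent memories about the 2023 migration"
```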
Next Steps [#next-steps] * [Own Your Knowledge](/docs/use-cases/shared-memory) -> Use any tool without losing context * [See Your Expertise](/docs/use-cases/expertise-graph) -> Visualize your knowledge * [Background Intelligence](/docs/advanced-features) -> Knowledge graph capabilities # See Your Expertise (/docs/use-cases/expertise-graph) import VideoPlayer from "@/components/ui/video-player" import { Step, Steps } from 'fumadocs-ui/components/steps'; The Problem [#the-problem] You've been learning for years. Building expertise. Accumulating knowledge. But can you see it? > I know I'm good at... stuff. Technical stuff. But if someone asked me to describe my expertise, I'd struggle. It's all intuition. Nothing concrete. Your knowledge is invisible. Scattered across memories, notes, conversations. You can't see the patterns. The connections. The clusters of expertise. The Solution [#the-solution] Nowledge Mem visualizes your knowledge as a **living graph**. Nodes are your memories and entities. Edges are relationships. And the graph **builds itself**: Background Intelligence automatically extracts entities and relationships from your memories overnight. Run **community detection** and watch your expertise clusters emerge: Expertise Graph How It Works [#how-it-works] The Graph Builds Itself [#the-graph-builds-itself] You don't need to manually tag or categorize anything. Background Intelligence reads your memories and extracts: * **Entities**: Technologies, people, concepts, projects * **Relationships**: How they connect to each other * **Evolution chains**: How your thinking on a topic has changed This happens automatically. Save memories through any channel (auto-sync, browser extension, Timeline, `/sum`) and the graph grows on its own. Automatic entity extraction requires a [Pro license](/docs/mem-pro) and a configured Remote LLM. Run Community Detection [#run-community-detection] In the right panel, find **Graph Algo** and click Compute under **Clustering**. The Louvain algorithm analyzes your knowledge structure and finds natural clusters: | Community | Size | Theme | | ------------------- | ----------- | ----------------------------- | | Distributed Systems | 87 memories | Backend architecture, scaling | | Team Leadership | 45 memories | Mentoring, communication | | Performance | 62 memories | Optimization, profiling | | Side Projects | 23 memories | Creative experiments | Each cluster gets a colored "bubble" around its nodes. Travel Through Time [#travel-through-time] The **timeline slider** at the bottom of the graph lets you filter by date range. Drag to "January 2024" and see your knowledge at that point. Drag forward and watch new clusters form, existing ones grow, and connections multiply. Play the animation to watch your expertise evolve over months. See when a new interest emerged, when it connected to existing knowledge, and when it grew into a full cluster. Explore and Discover [#explore-and-discover] Navigate the graph: * **Click** any node to see its details * **Double-click** to expand neighbors * **Shift+drag** to lasso-select multiple nodes * **Press C** to toggle community bubbles * **Press E** to expand selected node's neighbors Find patterns you never noticed: > Every leadership memory links back to debugging sessions. I lead by teaching debugging. 
What You'll Discover [#what-youll-discover] Expertise Clusters [#expertise-clusters] Community detection reveals where your knowledge naturally groups: * **Core strengths**: Large, dense clusters * **Emerging areas**: Small but growing clusters * **Bridges**: Nodes that connect multiple clusters (often your most unique skills) Knowledge Evolution [#knowledge-evolution] Background Intelligence tracks how your thinking changes: * **Tuesday**: "Using PostgreSQL for the new service" * **Thursday**: "Considering CockroachDB for migration" * **Friday briefing**: "Your database thinking is evolving" These evolution chains appear as linked nodes in the graph. You can see exactly where your opinions shifted and follow the trail. Hidden Patterns [#hidden-patterns] Explore and find: * Recurring themes you never consciously tracked * Connections between seemingly unrelated projects * Your unique perspective and approach * Gaps between related topics Asking AI About Your Graph [#asking-ai-about-your-graph] With your graph in view, ask AI Now to interpret it: > Based on my knowledge graph, what career paths fit me best? AI Now synthesizes: > Your memories show a unique intersection of deep systems knowledge with teaching ability. Your most central concepts (event-driven architecture, debugging) connect to both technical and leadership clusters. Consider: Staff Engineer, Developer Advocate, or Engineering Manager with technical focus. Other questions to try: * "What are my strongest expertise areas?" * "Where are the gaps in my knowledge?" * "What topics should I explore next?" * "How has my focus shifted over time?" The Compound Effect [#the-compound-effect] More memories = richer graph = deeper insights. **After 1 month:** > I can see my main topics, but clusters are small **After 6 months:** > Clear expertise areas. Unexpected connections emerging. Background Intelligence is finding patterns I missed. **After 1 year:** > I can literally see how my thinking has evolved. The connections I made last year laid groundwork for this year. **For performance reviews:** > I explored my graph before the review. Had concrete examples of growth across every dimension. Next Steps [#next-steps] * [Background Intelligence](/docs/advanced-features) -> How the graph grows automatically * [Own Your Knowledge](/docs/use-cases/shared-memory) -> Use any tool without losing context * [Search Through Time](/docs/use-cases/bi-temporal) -> Temporal queries and evolution chains # Overview (/docs/use-cases) import { Cards, Card } from 'fumadocs-ui/components/card'; import { Brain, Clock, FileText, Network, MessageSquare, Sparkles } from 'lucide-react'; Nowledge Mem learns from everything you do with AI. It auto-captures conversations, syncs sessions in real time, and builds a knowledge graph that grows overnight. Every connected tool starts with your full context. } href="/docs/use-cases/shared-memory" title="Own Your Knowledge"> Tell Claude once. Cursor already knows. One knowledge base across every AI tool you use. } href="/docs/use-cases/session-backup" title="Never Lose a Session"> Sessions auto-sync in real time. Claude Code, Cursor, Codex, ChatGPT -- every conversation captured. } href="/docs/use-cases/bi-temporal" title="Search Through Time"> The board asks why you chose React Native. Find what you believed then, not what you know now. } href="/docs/use-cases/notes-everywhere" title="Your Notes, Everywhere"> Obsidian, Notion, PDFs, Word docs. One search covers all your knowledge sources. 
} href="/docs/use-cases/expertise-graph" title="See Your Expertise"> The graph builds itself from your memories. Community detection reveals expertise clusters you didn't know you had. } href="/docs/ai-now" title="AI Now"> A personal AI agent with your full knowledge. Deep research, file analysis, presentations — purpose-built capabilities on your machine. Three Things That Change [#three-things-that-change] **It captures automatically.** The browser extension grabs insights from ChatGPT, Claude, Gemini, and 13+ platforms. Sessions from Claude Code, Cursor, and Codex sync in real time. You stop copying and pasting between tools. **It learns while you sleep.** Background Intelligence detects when your thinking evolves, synthesizes reference articles from scattered memories, and flags contradictions. Your morning briefing at `~/ai-now/memory.md` tells your AI tools what you're working on before you say anything. **It goes where you go.** One command connects 20+ AI agents. Switch tools freely. Your knowledge stays. How It Works [#how-it-works] 1. **Capture** -- browser extension, session sync, or type it into the Timeline 2. **Connect** -- the system links it to everything you already know 3. **Grow** -- Background Intelligence builds evolution chains, crystals, and flags overnight 4. **Use** -- any connected tool finds it when it's relevant Your knowledge compounds in Mem, independent of any single tool. Ready to Start? [#ready-to-start] Pick a use case above, or go straight to [Getting Started](/docs/getting-started) to set up Nowledge Mem. # Your Notes, Everywhere (/docs/use-cases/notes-everywhere) import VideoPlayer from "@/components/ui/video-player" import { Step, Steps } from 'fumadocs-ui/components/steps'; The Problem [#the-problem] You've been taking notes for years. Obsidian. Notion. Maybe both. Thousands of entries. Carefully tagged. Extensively linked. And yet... > I know I wrote about this. I just can't find it. The search is useless. The tags don't help. Worse: Your AI assistant has no idea any of this exists. You're explaining context that's already in your notes. Every. Single. Time. The Solution [#the-solution] We don't replace your note app. We **wire it into your knowledge**. Keep using Obsidian, Notion, Apple Notes or folders of Markdown files exactly as you do now. Nowledge Mem connects to them, making your notes searchable alongside your memories, by AI Now, and by any AI tool via MCP. And with the **Library**, you can drop PDFs, Word documents, and presentations in too. Everything becomes searchable from one place. Notes Everywhere How It Works [#how-it-works] Connect Your Notes [#connect-your-notes] **Obsidian:** 1. Open AI Now in Nowledge Mem 2. Go to **Plugins** -> Enable **Obsidian Vault** 3. Set your vault path (e.g., `/Users/you/Documents/ObsidianVault`) 4. Done. AI Now can now search your vault Notes Everywhere **Notion:** 1. Open AI Now -> **Plugins** -> Enable **Notion** 2. Click **Connect with Notion** 3. Authorize access in the browser popup 4. 
Your workspace is now accessible Import Documents to the Library [#import-documents-to-the-library] Drop files directly into the Timeline input or open the Library view: | Format | Extensions | What Happens | | ----------------- | ----------- | -------------------------------------------- | | **PDF** | .pdf | Text extracted, split into segments, indexed | | **Word** | .docx, .doc | Parsed to text, segmented, indexed | | **Presentations** | .pptx | Slide content extracted and indexed | | **Markdown** | .md | Parsed and indexed directly | Once indexed, document content is searchable alongside your memories and notes. Search Across Everything [#search-across-everything] Ask AI Now any question: > What do my notes say about quantum computing? AI Now: 1. Searches your Obsidian vault 2. Searches your Notion workspace 3. Searches your Nowledge memories 4. Searches your Library documents 5. Combines and synthesizes results One question. All your knowledge sources. Distill Into Memories [#distill-into-memories] Found valuable notes? Turn them into permanent memories: > Distill the key insights from these quantum computing notes AI Now creates: * **Insight**: "Quantum error correction requires O(n^2) qubits" * **Decision**: "Focus on NISQ algorithms for near-term research" * **Fact**: "IBM claimed quantum advantage Dec 2023" These memories are now: * Searchable with semantic understanding * Connected in the knowledge graph * Accessible to ALL your AI tools via MCP * Part of your Working Memory briefing when relevant Obsidian Integration [#obsidian-integration] Setup [#setup] 1. Open Nowledge Mem 2. Click the AI Now tab 3. Go to **Plugins** in the sidebar 4. Find **Obsidian Vault** and toggle it on 5. Enter your vault path Example: `/Users/yourname/Documents/ObsidianVault` What You Can Do [#what-you-can-do] Once connected: * Search notes by content: *"Find my notes about machine learning"* * Read specific notes: *"Show me the note about project kickoff"* * Reference in context: *"Based on my Obsidian notes about X, help me..."* Your vault is read locally. Notes are never uploaded anywhere. Nowledge Mem just reads the files on your machine. Notion Integration [#notion-integration] Setup [#setup-1] 1. Open AI Now -> **Plugins** 2. Find **Notion** and click **Connect** 3. Authorize in the browser popup 4. Select the workspaces you want to connect What You Can Do [#what-you-can-do-1] * Search your workspace: *"Find pages about quarterly planning"* * Read page content: *"What's in my Product Roadmap page?"* * Cross-reference: *"Compare my Notion notes with my memories about X"* * Deep Research with both public information and private knowledge: *"What's the latest on quantum computing?"* Notion uses secure OAuth. You control exactly which pages Nowledge Mem can access. Revoke anytime from Notion settings. Built-in Integrations [#built-in-integrations] Some tools have Nowledge Mem built in: * **DeepChat**: Toggle Nowledge Mem in settings. Your memories become available in every chat. * **LobeHub**: Install from the marketplace. Full MCP integration. Coming Soon [#coming-soon] * **Apple Notes** integration Join the [Community](/docs/community) to request integrations.
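Prefer the terminal? The same kind of distilled memory can also be created by hand with the documented `nmem` CLI; the title and content below are illustrative:

```bash
# Save a distilled insight as a standalone memory
nmem m add "Quantum error correction requires O(n^2) qubits" --title "QEC overhead"

# Confirm it's findable by meaning, not just exact words
nmem m search "quantum error correction"
```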
Next Steps [#next-steps] * [AI Now](/docs/ai-now) -> Learn what else AI Now can do * [Library](/docs/library) -> Import and search documents * [See Your Expertise](/docs/use-cases/expertise-graph) -> Visualize your knowledge graph * [Integrations](/docs/integrations) -> Full setup guides # Never Lose a Session (/docs/use-cases/session-backup) import VideoPlayer from "@/components/ui/video-player" import { Step, Steps } from 'fumadocs-ui/components/steps'; The Problem [#the-problem] You just had an epic debugging session. Three hours with Claude Code. You found a race condition, traced it through 15 files, built a bulletproof fix with tests. But AI conversations are ephemeral. Context gets compacted, token limits hit, and sessions expire. That 200-message thread? The early context is already gone. > "I solved this exact problem before. I just can't remember how. Or where. Or when." The Solution [#the-solution] Your sessions sync automatically. Claude Code, Cursor, Codex, and OpenCode conversations are captured in real time. Browser conversations from ChatGPT, Claude, and Gemini are grabbed by the extension. No commands to remember. No manual exports. When you're ready, distill a thread into permanent, searchable, graph-connected memories. How It Works [#how-it-works] Sessions Sync Automatically [#sessions-sync-automatically] **Claude Code and Codex (npx skills):** Install once: ```bash npx skills add nowledge-co/community/nowledge-mem-npx-skills ``` Sessions are saved automatically. The agent distills key insights at session end. **Cursor and OpenCode (Auto-Sync):** Nowledge Mem watches for new conversations in real time. Open **Threads** to see them appear as you work. No import step needed. **Browser (ChatGPT, Gemini, Claude Web):** The Exchange v2 extension captures conversations from 13+ AI chat platforms. Insights flow into Mem as you chat. **Manual save (any MCP tool):** ``` /save -> Checkpoint the full thread /sum -> Distill insights into memories ``` Distill Into Permanent Knowledge [#distill-into-permanent-knowledge] Open a saved thread and click **Distill**. The AI reads the entire conversation and extracts: * **Decisions**: "Chose sliding window over token bucket because..." * **Insights**: "Race conditions in async callbacks need mutex locks" * **Patterns**: "Testing time-based bugs requires mock clocks" * **Facts**: "Redis SETNX provides atomic lock acquisition" Each becomes a standalone, searchable memory with proper labels. Background Intelligence Connects It [#background-intelligence-connects-it] Your new memories don't sit in isolation. Background Intelligence: * Links them to previous work on the same codebase * Detects if they update or contradict earlier decisions * Connects them to related entities in the knowledge graph * Surfaces them in your next morning's Working Memory briefing Three months later, a colleague hits the same bug. Your briefing mentions it before they even ask. Search Anytime [#search-anytime] Three months later, similar bug appears: > Search: "payment race condition" Nowledge Mem returns the full context: the problem, the debugging steps, the solution, the test approach. No more re-solving solved problems. 
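The same flow also works from the command line with the documented `nmem` CLI; the summary text here is illustrative:

```bash
# Checkpoint the current Claude Code session with a short summary
nmem t save --from claude-code -s "Payment race condition: root cause and fix"

# Months later, find the thread again
nmem t search "payment race condition"
```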
What Gets Captured [#what-gets-captured] | Source | How | What You Get | | --------------- | -------------------------------- | ------------------------------ | | **Claude Code** | npx skills (auto) or `/save` | Full session with code context | | **Codex** | npx skills (auto) or `/save` | Full session with code context | | **Cursor** | Auto-sync (real-time watching) | Conversations as they happen | | **OpenCode** | Auto-sync (real-time watching) | Conversations as they happen | | **ChatGPT** | Browser extension (auto-capture) | Insights from web chats | | **Claude Web** | Browser extension (auto-capture) | Insights from web chats | | **Gemini** | Browser extension (auto-capture) | Insights from web chats | | **13+ more** | Browser extension | Any supported AI chat platform | What Gets Extracted [#what-gets-extracted] When you distill a thread, the AI creates memories categorized by type: | Type | Example | Labels | | -------------- | --------------------------------------- | ---------------------- | | **Decision** | "Used Redis for distributed locking" | decision, architecture | | **Insight** | "Async callbacks need careful ordering" | insight, debugging | | **Procedure** | "Steps to reproduce race conditions" | procedure, testing | | **Fact** | "SETNX returns 1 if key was set" | fact, redis | | **Experience** | "Debugging session on payment service" | experience, project | The Compound Effect [#the-compound-effect] One thread saved is useful. Ten threads saved is a knowledge base. A hundred threads? That's institutional memory. > "Junior dev hit the same bug today. Sent them my memory. They fixed it in 20 minutes instead of 3 hours." Your debugging sessions aren't just conversations. They're training data for your future self. Pro Tips [#pro-tips] You don't need to distill every thread. Save important sessions: the breakthroughs, the architectural decisions, the hard-won solutions. For sensitive codebases, review what you're saving. Threads might contain proprietary code or credentials. Next Steps [#next-steps] * [Own Your Knowledge](/docs/use-cases/shared-memory) -> Use any tool without losing context * [Search Through Time](/docs/use-cases/bi-temporal) -> Find memories from specific time periods * [Integrations](/docs/integrations) -> Setup guides for each tool # Own Your Knowledge (/docs/use-cases/shared-memory) import VideoPlayer from "@/components/ui/video-player" import { Step, Steps } from 'fumadocs-ui/components/steps'; The Problem [#the-problem] You told Claude Code about your project architecture last week. Today, you're explaining it again to Cursor. Tomorrow, you'll try the new tool everyone's talking about, and start from scratch. This isn't a memory problem. It's a lock-in problem. Your knowledge is trapped inside whichever tool you used last. > "I already explained this. Why do I have to start over in a different tool?" The Solution [#the-solution] Nowledge Mem is a knowledge layer that sits between you and every AI tool you use. It captures your insights automatically, syncs your sessions in real time, and writes a daily briefing so every tool starts with your full context. One command to connect. Zero workflow changes. Shared Memory How It Works [#how-it-works] Connect in One Command [#connect-in-one-command] ```bash npx skills add nowledge-co/community/nowledge-mem-npx-skills ``` Works with Claude Code, Cursor, Codex, OpenCode, OpenClaw, Alma, and 20+ other agents. Installs four skills: Working Memory briefing, knowledge search, session saving, and insight capture. 
After setup, your agent reads your morning briefing at session start, searches your knowledge mid-task, and saves what it learns. Capture Happens Automatically [#capture-happens-automatically] You don't need to remember to save. Mem captures from multiple channels: **Browser Extension (Exchange v2):** The extension monitors your AI chats on ChatGPT, Claude, Gemini, and 13+ platforms. Insights are captured automatically as you work. **Session Auto-Sync:** Claude Code, Cursor, Codex, and OpenCode sessions sync in real time. A 3-hour debugging session is preserved without you typing a command. **Timeline Input:** Type a thought, paste a URL, drop a file. For the times you want to save something specific. **Manual Commands:** ``` /sum -> Summarize this conversation into memories /save -> Checkpoint the entire thread ``` Every Tool Starts Informed [#every-tool-starts-informed] Each morning, Background Intelligence writes a briefing to `~/ai-now/memory.md`. Every connected AI tool reads it at session start. Your agent already knows: * What you're working on * Decisions you made recently * Open questions and contradictions * How your thinking has evolved No re-explanation needed. Open Claude Code at 9 AM and it picks up where you left off. Switch Tools Freely [#switch-tools-freely] New tool? Connect it to Mem. It immediately has your full context. **Example:** You saved: *"Architecture decision: Using Redis for session management because..."* Later, in Cursor: *"Help me add session handling"* Cursor searches your knowledge, finds the Redis decision, applies the same pattern. No re-explanation needed. Real Example [#real-example] **Without Nowledge Mem:** > You: "Help me implement rate limiting" > > Claude: "What kind? Token bucket? Sliding window? What's your use case?" > > You: *\[Explains for the 5th time this month]* **With Nowledge Mem:** > You: "Help me implement rate limiting" > > Claude: *\[Reads your Working Memory briefing, searches your memories]* "Based on your decision last month to use sliding window rate limiting for the payment service, here's an implementation matching your Redis patterns..." What Gets Connected [#what-gets-connected] | Channel | How It Works | What Gets Captured | | --------------------- | -------------------------- | ---------------------------------------------------- | | **npx skills** | One command, 20+ agents | Working Memory, search, save, distill | | **Browser Extension** | Auto-capture from AI chats | Insights from ChatGPT, Claude, Gemini, 13+ platforms | | **Session Auto-Sync** | Real-time watching | Claude Code, Cursor, Codex, OpenCode sessions | | **MCP** | Direct protocol connection | Any MCP-compatible tool | | **Claude Desktop** | One-click extension | Full integration | | **Built-in** | Toggle in settings | DeepChat, LobeHub | The Compound Effect [#the-compound-effect] A few weeks in, any new tool you connect already knows how you work. Your preferences persist across tools. Your decisions compound. Every insight you've ever saved is available to every tool you'll ever use. The value lives in Mem, not in any single tool. 
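To sanity-check that a newly connected tool or machine really sees the same knowledge base, the documented CLI commands are enough. A quick sketch; the search query is illustrative:

```bash
# Confirm the Mem server is reachable from this environment
nmem status

# Any tool pointed at the same Mem instance gets the same results
nmem m search "rate limiting"
```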
Next Steps [#next-steps] * [Never Lose a Session](/docs/use-cases/session-backup) -> Auto-sync and backup AI conversations * [Search Through Time](/docs/use-cases/bi-temporal) -> Find what you knew when * [Integrations](/docs/integrations) -> Connect all your tools # Background Intelligence (/docs/zh/advanced-features) import VideoPlayer from "@/components/ui/video-player" import { Step, Steps } from 'fumadocs-ui/components/steps'; In January, you save a decision to use PostgreSQL. In July, you record that you're migrating to CockroachDB. You never connected the two. Nowledge Mem did. It links them, tracks the evolution, and the next time you search for either, both appear with the full story of how your thinking changed. This happens while you sleep. You open the app and the connections are already there.
Knowledge Graph Background Intelligence requires a Pro license and a configured Remote LLM. Enable it in **Settings > Knowledge Processing**. Knowledge Graph [#知识图谱] Every memory you save becomes a node in a living graph. The system extracts people, technologies, concepts, and projects, and links them to the knowledge you already have. The result: search "distributed systems" and find your memory about "Node.js microservices." The words differ; the meaning matches. With Background Intelligence enabled, knowledge graph extraction runs automatically for new memories. You can also trigger it manually for older ones. What Gets Extracted [#提取内容] When a memory is processed, the LLM identifies: * **Entities**: people, technologies, concepts, organizations, projects * **Relationships**: how those entities connect to each other * **Links to existing knowledge**: connections to memories already in the graph Trigger extraction for any memory by clicking the **Knowledge Graph** button on its card. Distill with Knowledge Graph Knowledge Evolution [#知识演变] When you save something new about a topic you've written about before, the system detects the relationship and creates a version link: | Link type | What happened | Example | | ------ | ------ | ---------------------------------- | | **Replaces** | You changed your mind | "Use CockroachDB" replaces "Use PostgreSQL" | | **Enriches** | You learned more | "React 19 adds a compiler" enriches "React 18 concurrent rendering" | | **Confirms** | Independent agreement | Two separate reviews recommend the same library | | **Challenges** | Contradiction detected | Your March assessment disagrees with your October conclusion | You can trace how your understanding of any topic changed over time. See where you changed your mind. Understand why. Community Detection [#社区检测] Graph algorithms find natural clusters in your knowledge: groups of tightly connected memories that form coherent topics. Your graph might reveal clusters for "React Patterns," "API Design," and "Database Optimization." A map of your expertise you never had to draw by hand. In **Graph View**, click **Compute** to run community detection. Graph Algorithm Compute Visual Exploration [#可视化探索] Your knowledge as an interactive network. Click a memory to see everything connected to it. Zoom into clusters. Follow links between topics you never thought to compare.
The timeline slider filters by date range. Watch how your knowledge in a domain grows over weeks or months. What the System Discovers [#系统会发现什么] The graph is the foundation. On top of it, Background Intelligence actively analyzes your knowledge and surfaces findings in the Timeline. Insights [#洞察] The best insights are connections you wouldn't have found on your own. * **Cross-domain links.** In March you noted that JWT refresh tokens were causing race conditions in the payment service. In September you chose the same token rotation pattern for a new auth service. The system caught it: same failure pattern, different project. * **Temporal patterns.** "This is the third time in two months you've revisited this database migration decision." Maybe it's time to commit. * **Forgotten context.** "Your March assessment contradicts the approach you chose in October." The system remembers what you wrote, even when you've forgotten it yourself. Every insight cites its sources, so you can trace the reasoning yourself. One connection that changes how you think beats ten statements of the obvious. Strict quality gates keep the noise out. Crystals [#结晶] Five memories about React patterns saved over three months. Scattered across your timeline. Hard to piece together. A crystal synthesizes them into one reference article. Sources are cited. When new information arrives, it updates automatically. You don't request crystals. They appear on their own when the system has enough material to say something useful. Flags [#标记] Sometimes the system finds problems, not connections: | Flag | Meaning | Example | | ------- | ---------- | -------------------------------------- | | **Contradiction** | Two memories disagree | "Use JWT tokens" vs. "Session cookies are more secure" | | **Stale** | Newer knowledge supersedes older | A deployment guide from 6 months ago, overwritten by recent notes | | **Needs verification** | Strong claim, no corroboration | A single assertion with no supporting evidence | Each flag appears in the Timeline. You can dismiss it, acknowledge it, or link it to a resolution. Working Memory [#working-memory] Every morning, a briefing lands at `~/ai-now/memory.md`: * **Active topics** based on recent activity * **Unresolved flags** needing your attention * **Recent changes** in your knowledge base * **Priority items** by frequency and recency Any AI tool connected via MCP reads this file at session start. Your coding assistant knows what you're working on before you say a word. You can edit the file directly. Your changes are respected. Your Working Memory (`~/ai-now/memory.md`) is readable by any connected AI tool via MCP. Your coding assistants, writing tools, and other agents check what you're working on before starting a task. Configuration [#配置] Control background processing in **Settings > Knowledge Processing**: Memory Processing Settings | Setting | Default | Controls | | -------- | ------ | -------------------- | | **Background Intelligence** | Off | Master switch for all background processing | | **Daily Briefing** | On (when enabled) | Daily Working Memory generation | | **Briefing Hour** | 8 | When the daily briefing runs (local time) | | **Auto Extraction** | On (when enabled) | Automatic knowledge graph enrichment for new memories | On Linux servers, configure via the CLI:

```bash
nmem config settings set backgroundIntelligence true
nmem config settings set autoDailyBriefing true
nmem config settings set briefingHour 8
```

Next Steps [#下一步] * **[Getting Started](/zh/docs/getting-started)**: the Timeline, document import, and every way to add knowledge * **[Integrations](/zh/docs/integrations)**: connect AI tools via MCP and the browser extension * **[Troubleshooting](/zh/docs/troubleshooting)**: solutions to common issues # AI Now (/docs/zh/ai-now) import { Callout } from 'fumadocs-ui/components/callout'; import { Step, Steps } from 'fumadocs-ui/components/steps'; import { Tab, Tabs } from 'fumadocs-ui/components/tabs'; import { Telescope, FileText, Pencil, Presentation, Download, Plane, FastForward } from 'lucide-react'; import VideoPlayer from "@/components/ui/video-player"; AI Now is a personal AI agent that runs on your machine. It has your complete knowledge base: every decision, insight, and document you've ever saved. Through plugins it connects to Obsidian, Notion, Apple Notes, and any external service. It's not a chatbot. It has purpose-built capabilities: multi-source deep research, file and data analysis with visualization, presentations with live preview and export, travel planning. Every capability builds on your full context: your past decisions, your patterns, your history.
AI Now requires a configured **Remote LLM**. Go to **Settings** → **Remote LLM** to set one up; see [Remote LLM](/zh/docs/usage#远程-llm) for details. Capabilities [#能力] | Category | What it does | | --------- | ---------------------------------------- | | **Memory Search** | Finds relevant memories through semantic understanding | | **Deep Research** | Multi-source research combining your memories with web search | | **File Analysis** | Analyzes Excel, CSV, Word, and PDF files you provide | | **Data Visualization** | Generates charts from your data | | **Presentations** | Creates slides with live preview and PowerPoint export | | **Travel Planning** | Creates interactive day-by-day itineraries | | **Integrations** | Connects Notion, Obsidian, Apple Notes, and MCP servers | Quick Start [#快速入门] Configure a Remote LLM [#配置远程-llm] Go to **Settings** → **Remote LLM** and add your API key. Open AI Now [#打开-ai-now] Click the **AI Now** tab in the sidebar, or press Cmd/Ctrl + 5 Start a Task [#开始任务] Ask anything. AI Now automatically searches your memories when relevant: > What architecture decisions have I made about caching? It pulls context from your memories, searches the web and your connected notes (Notion, Obsidian, Apple Notes), and synthesizes one answer. You can also drag in files or folders for analysis, or ask it to generate reports grounded in your knowledge. AI Now creates or updates memories automatically as it works. Reference Memories in Chat [#在聊天中引用记忆] Use @ in chat to search memories and mention them in responses. Deep Research [#深度研究] AI Now can run parallel subtasks, searching across multiple sources and synthesizing the results. Deep Research Click the Research toggle in the AI Now chat interface to enable deep research. How It Works [#工作原理] Ask a research question: > Research the current state of quantum error correction AI Now will: 1. Search your memories for what you already know 2. Search the web from multiple angles 3. Synthesize one answer 4. Cite sources with reliability indicators Skills [#技能] Skills are specialized capabilities for specific tasks. | Skill | What it enables | | ----------- | ---------------------- | | **Documents** | Excel/CSV analysis, chart generation, file operations | | **Presentation Creator** | Slide generation with live preview and export | | **Travel Planner** | Interactive itinerary creation | Enable skills under **AI Now** → **Plugins** → **Skills**. File Analysis [#文件分析] Attach files or folders to your conversation for instant analysis. Toggle the Documents skill in the AI Now plugins to enable data analysis. Supported Files [#支持的文件] | Type | Extensions | What AI Now does | | -------- | --------------- | -------------- | | **Spreadsheets** | .xlsx, .xls, .csv | Analyzes data, finds patterns, generates charts | | **Documents** | .docx, .doc, .pdf | Summarizes, extracts key points, answers questions | | **Code** | .py, .js, .ts, etc. | Reviews, explains, suggests improvements | Example [#示例] 1. Click the folder icon and attach `sales_q4.xlsx` 2. Ask: "What are the top 3 trends in this data?" 3. AI Now analyzes it and generates visualizations You can also attach a whole folder to analyze multiple files at once. Drag in a folder to analyze: Data Analysis Presentations [#演示文稿] AI Now can create presentations with live preview and editing. Toggle the Presentations skill in the AI Now plugins to enable presentation creation. Create Slides [#创建幻灯片] > Create a presentation from our research above, including some charts or graphics to support the insights AI Now generates well-structured slides with charts and insights. Presentation Creation Edit [#编辑] Once generated, refine with follow-up requests: * "Make the third slide more visual" * "Add a slide on customer segmentation" * "Simplify the conclusion" Or click the Edit button to edit the presentation. Export [#导出] Click the PPTX button to download as PowerPoint (.pptx) for use in other tools. Travel Planning [#旅行规划] AI Now can create detailed travel itineraries. Toggle the Travel Planner skill in the AI Now plugins to enable travel planning. > Plan a 5-day Tokyo trip focused on food and culture AI Now generates an interactive day-by-day itinerary with activities, locations, and tips, using your recent memories and deep research as context. Travel Planning Plugins [#插件] Connect your other apps through plugins. Built-in Plugins [#内置插件] Obsidian [#obsidian] Connect your local Obsidian vault: 1. Go to **AI Now** → **Plugins** 2. Enable **Obsidian** 3. Set your vault path AI Now can now search and read your Obsidian notes alongside your memories. Notion [#notion] Connect your Notion workspace: 1. Go to **AI Now** → **Plugins** 2. Enable **Notion** 3. Click **Connect** and authorize in the browser AI Now can now search your Notion pages and databases. Apple Notes (macOS) [#apple-notes-macos] On macOS, AI Now can search your Apple Notes: 1. Go to **AI Now** → **Plugins** 2. Enable **Apple Notes** 3. 
Grant permission when prompted Custom MCP Plugins [#自定义-mcp-插件] AI Now supports the Model Context Protocol (MCP) for custom integrations. Go to **AI Now** → **Plugins** → **Custom Plugins** Click **Add MCP Server** Configure the server (stdio command or HTTP endpoint) Click **Test Connection** to verify Enable the plugin MCP plugins with OAuth (such as GitHub and Slack) are detected automatically and prompt you to authorize. Session Management [#会话管理] Conversations are saved automatically. Click a previous session to resume it, or create new sessions to run different workflows in parallel. Auto-Approve Mode [#自动批准模式] For faster workflows, enable Auto to skip confirmation prompts for file operations and other actions. Auto-approve grants AI Now permission to act without asking. Enable it only for trusted workflows. Tips [#提示] * **Be specific**: "What did we decide about the database migration last month?" works better than "database stuff" * **Attach context**: drag and drop files, or use `@` to reference specific memories * **Use sessions**: separate sessions for separate projects Next Steps [#下一步] * **[Remote LLM Setup](/zh/docs/usage#远程-llm)**: configure your AI provider * **[Integrations](/zh/docs/integrations)**: connect all your tools * **[Background Intelligence](/zh/docs/advanced-features)**: how your knowledge grows automatically # Community & Support (/docs/zh/community) import { Card as RawCard, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card" import { MessageSquare, Twitter, Github, Mail, Users, BookOpen, MessageCircle, AlertTriangle, Lightbulb } from "lucide-react" Get help, report issues, contribute. Community Channels [#社区频道] Getting Support [#获得支持] Documentation [#文档] From installation to advanced features, the docs cover the main use cases: * **[Getting Started](/zh/docs/getting-started)** - Installation and first use * **[Integrations](/zh/docs/integrations)** - MCP, browser extension, CLI * **[Background Intelligence](/zh/docs/advanced-features)** - Knowledge graph, Insights, Crystals, Working Memory * **[Troubleshooting](/zh/docs/troubleshooting)** - Common issues and solutions Reporting Issues & Feature Requests [#报告问题和请求功能] Found a bug or have a feature suggestion? File it through GitHub Issues: Email Support [#邮件支持] Contact the team directly:
[hello@nowledge-labs.ai](mailto:hello@nowledge-labs.ai)
Pro plan users get access to a dedicated Pro Discord channel and direct instant-message support. [Learn more about Pro](/zh/docs/mem-pro). Stay Connected [#保持联系] Keep up with the latest: 1. **Join [Discord](https://nowled.ge/discord)** for real-time discussion and support 2. **Follow on Twitter** ([@NowledgeMem](https://x.com/nowledgemem)) for product updates 3. **Watch our GitHub repo** ([nowledge-co/community](https://github.com/nowledge-co/community)) for technical updates and releases 4. **Read the blog** at [nowledge-labs.ai/blog](https://nowledge-labs.ai/blog) for deep dives # Getting Started (/docs/zh/getting-started) import VideoPlayer from "@/components/ui/video-player"; import { Step, Steps } from 'fumadocs-ui/components/steps'; Timeline [#timeline] Open Nowledge Mem and this is what you see: Nowledge Mem Timeline The Timeline is your main interface. Save a Thought [#保存一个想法] Type a decision you just made, or an insight from a conversation. Press Enter. Mem handles the rest: title, key concepts, graph connections. You just write. Open the graph view and you'll find it already linked to related memories. Ask a Question [#提一个问题] Type a question: *"What did I decide about the auth approach last month?"* The answer comes from **your own knowledge**, not the internet. Every question searches all your memories and synthesizes an answer from them. Drop in a URL or File [#放入一个-url-或文件] Paste a URL and the page content is parsed and indexed automatically. Drag in PDFs, Word documents, or presentations; they get the same treatment. Every input grows your knowledge base. Nowledge Mem Timeline Connect Any Tool [#连接任何工具] One command does it. Claude Code, Cursor, Codex, OpenCode, OpenClaw, Alma, and 20+ Agents [#claude-codecursorcodexopencodeopenclawalma-等-20-个智能体] One command installs the full skill set (Working Memory briefing, knowledge search, session saving, insight capture):

```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```

After installation, agents start every session with your context, search your knowledge as they work, and save what they learn. If you prefer a lighter install, open **Settings > Preferences** and install the CLI skill under the **Developer Tools** section. This gives your agents core search and recall without the full autonomous workflow. Install skills from Settings Or Configure MCP Directly [#或直接配置-mcp] For any MCP-capable tool, add this JSON to its MCP settings:

```json
{ "mcpServers": { "nowledge-mem": { "url": "http://127.0.0.1:14242/mcp", "type": "streamableHttp" } } }
```

Claude Desktop [#claude-desktop] [Download the extension](/zh/docs/integrations#claude-desktop): one-click install, no configuration. See [Integrations](/zh/docs/integrations) for tool-specific guides. More Ways to Add [#更多添加方式] * **AI conversations**: the [browser extension](/zh/docs/integrations#browser-extension) auto-captures insights from ChatGPT, Claude, Gemini, and 13+ platforms * **Conversation files**: [import](/zh/docs/integrations#thread-file-import) exported conversations from Cursor, ChatGPT, or ChatWise * **Manual creation**: click **+ Create** in the Memories view, or use `nmem m add` in any terminal ([CLI reference](/zh/docs/cli)) Come Back Tomorrow [#明天再来看看] What happens after a few days of use: **Tuesday** you save a decision: "Use PostgreSQL for the new service." **Thursday** you mention CockroachDB as a possible migration target. **Friday morning** your briefing at `~/ai-now/memory.md` reads: "Your database choice is evolving. The PostgreSQL decision (Tuesday) is in tension with the CockroachDB consideration (Thursday)." You never connected the two. Mem did. That's **Background Intelligence**: * **Knowledge Evolution**: Mem detects when your thinking on a topic changes, links the versions automatically, and preserves the full trail. * **Crystals**: once enough memories accumulate in one area, Mem synthesizes them into a citable reference article. * **Flags**: when a past thought contradicts a current one, a flag appears in the Timeline. You decide what to do with it. * **Working Memory**: a daily briefing at `~/ai-now/memory.md`. AI tools read it at session start and know what you're working on before you say anything. None of this needs your attention. It shows up in the Timeline on its own. Background Intelligence requires a [Pro license](/zh/docs/mem-pro) and a configured Remote LLM. Next Steps [#下一步] * **[Using Nowledge Mem](/zh/docs/usage)**: the daily experience, search, briefings, and how your tools use your knowledge * **[AI Now](/zh/docs/ai-now)**: a personal AI assistant built on your knowledge base * **[Background Intelligence](/zh/docs/advanced-features)**: how your knowledge grows automatically * **[Integrations](/zh/docs/integrations)**: connect your AI tools * **[Access Mem Anywhere](/zh/docs/remote-access)**: reach Mem from other computers, agent nodes, and browser tools via URL + API key # Nowledge Mem (/docs/zh) import { Card as RawCard, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card" import { ArrowRight, Zap, Bot, Network, Sparkles } from "lucide-react" import VideoPlayer from "@/components/ui/video-player" Your AI tools remember nothing. Nowledge Mem does. Save a decision, an insight, a breakthrough, and it's automatically linked to everything you already know. The knowledge graph grows as you work, tracking how your thinking evolves. While you sleep, the system finds the connections you missed and writes a briefing for your AI tools in the morning. Every tool you connect shares the same knowledge base. Claude Code, Cursor, Codex, ChatGPT, and next week's new tool too. Explain once; every tool knows.
Connect Any Tool [#连接任何工具] Supports the MCP protocol, a browser extension, and direct plugins. Skill-based plugins with autonomous memory access MCP integration for searching and creating memories One-click extension install Capture conversations from ChatGPT, Gemini, and 13+ platforms Import Your Documents [#导入你的文档] Drag in PDFs, Word documents, or presentations; they're parsed automatically and indexed alongside your memories. When you ask a question in the Timeline, answers draw on both documents and memories. Local-First Privacy [#本地优先隐私] Everything runs on your device. No cloud, no account required. Connect a Remote LLM when you need more processing power, but your data never passes through Nowledge servers. Up and running in minutes Your first five minutes # Installation (/docs/zh/installation) import { DragToApplicationsAnimation } from '@/components/docs/drag_install'; import { InstallationSteps } from '@/components/docs/installation-steps'; import { Step, Steps } from 'fumadocs-ui/components/steps'; import { Tab, Tabs } from 'fumadocs-ui/components/tabs'; import { ExternalLink, Download } from 'lucide-react'; import { Button } from '@/components/ui/button'; Nowledge Mem is currently in **private alpha**. To get download access: * **Join the waitlist**: submit your email [here](https://nowled.ge/alpha) and we'll send you a download link within hours. * **Get access now**: [Pro plan](/zh/pricing) subscribers get immediate download access. Already have access? The download link is in your alpha invitation email. If you don't see it, check your **spam** folder. System Requirements [#系统要求] Minimum system requirements: | Requirement | Specification | | ------------ | -------------------------------------------------------------------------------- | | **OS** | macOS 15 or later (Apple Silicon); Windows 10 or later | | **Memory (RAM)** | 16 GiB minimum | | **Disk space** | 10 GiB free | | **Network** | If you use a network proxy, make sure it bypasses `127.0.0.1` and `localhost` | **Linux servers** are supported in headless mode. See the **[Linux Server Deployment](/docs/zh/server-deployment)** guide to run Nowledge Mem on a server without a desktop environment. Installation Steps [#安装步骤] Step 1: Install the App [#step-1-place-app] Drag Nowledge Mem to the `/Applications` folder. Install from the Microsoft Store. Search for "Nowledge Mem" in the [Microsoft Store](https://apps.microsoft.com/detail/9ntrknn2w5dq?hl=zh-cn\&gl=CN\&ocid=pdpshare), or click the button below to open the Microsoft Store. Click **Install** to install Nowledge Mem. Microsoft Store installation Step 2: Launch the App [#step-2-first-boot] Double-click the Nowledge Mem icon in your Applications folder to launch the app for the first time. If the app takes too long to start or shows an error: * **Service timeout**: if you see "starting services is taking too long," a global proxy is usually blocking access to `localhost`. Disable the proxy and retry. * **macOS version**: make sure you're running macOS 15 or later. Older versions are not supported. * **Need more help?** See the [troubleshooting guide](/zh/docs/troubleshooting) to check logs and get detailed diagnostics. You can share logs with our community or ask for support by email. After installation, Nowledge Mem launches automatically. To launch manually, click "Open" on Nowledge Mem in the Microsoft Store, or search "Nowledge Mem" from the Start menu. If the app takes too long to start or shows an error: * **Service timeout**: if you see "starting services is taking too long," a global proxy is usually blocking access to `localhost`. Disable the proxy and retry. * **Need more help?** See the [troubleshooting guide](/zh/docs/troubleshooting) to check logs and get detailed diagnostics. You can share logs with our community or ask for support by email. Step 3: Download AI Models [#step-3-download-models] After launching Nowledge Mem, you need to download the local AI models (about 2.4 GB total): * **Apple Silicon Macs**: on-device LLM supported. * **Windows**: Remote LLM required. * **Intel Macs**: Remote LLM required. * **Linux**: Remote LLM required. **Check the notification**: you'll see a download prompt in the top-right corner of the app **Navigate to Models**: click the notification button, or go to **Settings** → **Models** **Install the model**: click **Install** on the LLM model card LLM model installation The download starts automatically and you can monitor progress: LLM model installation progress Depending on your connection, the download may take 5-15 minutes. Models only need to be downloaded once. Step 4: Install the Browser Extension [#step-4-browser-extension] Your AI conversations on ChatGPT, Claude, Gemini, and other platforms are full of decisions and insights, but they vanish when you close the tab. The **Nowledge Mem Exchange** browser extension captures them automatically. After installing, click the extension icon to open the side panel. Configure your LLM provider in **Settings** to enable auto-capture. ChatGPT, Claude, Gemini, Perplexity, DeepSeek, Kimi, Qwen, POE, Manus, Grok, and more. The extension monitors your conversations and saves the valuable insights: decisions, discoveries, and conclusions, while routine Q&A is skipped automatically. See the [browser extension guide](/zh/docs/integrations#browser-extension) for details. Next Steps [#下一步] * **[Getting Started](/zh/docs/getting-started)**: your first five minutes * **[Integrations](/zh/docs/integrations)**: connect Claude Code, Cursor, and other AI tools * **[Linux Server Deployment](/zh/docs/server-deployment)**: run in headless mode on a Linux server # Integrations (/docs/zh/integrations) import VideoPlayer from "@/components/ui/video-player" import { McpServerView } from "@/components/docs/mcp" import { BrowserExtensionGuide } from "@/components/docs/browser-extension-guide" import { FileImportGuide } from "@/components/docs/file-import" import { InlineTOC } from 'fumadocs-ui/components/inline-toc'; import { Step, Steps } from 'fumadocs-ui/components/steps'; import { Button } from '@/components/ui/button'; import { Download } from 'lucide-react'; import { CodeXml } from 'lucide-react'; import { Files } from 'lucide-react'; import { Braces } from 'lucide-react'; import { FileText } from 'lucide-react'; Nowledge Mem connects the tools you use today and the new ones you'll switch to tomorrow. Knowledge stays in one place; tools come and go. Quick Start (One Command) [#快速开始一条命令] Works with Claude Code, Cursor, Codex, OpenCode, OpenClaw, Alma, and 20+ agents:

```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```

This installs four skills: **search-memory**, **read-working-memory**, **save-thread**, and **distill-memory**. Your agent starts every session with context, searches your knowledge when needed, and saves important findings as it works. | I want to... 
| Do this | | --------------------------------------------------------------- | ----------------------------------------------------------------- | | Use Nowledge Mem in **Claude Code, Codex, Cursor, OpenCode, or Alma** | npx skills (above) or [tool-specific setup](#claude-code) / [Alma plugin](#alma) | | Use Nowledge Mem in **OpenClaw** | [OpenClaw plugin](#openclaw) | | Search memories from **Raycast** | [Raycast extension](#raycast) | | Capture memories from **ChatGPT, Claude, Gemini**, and 13+ AI platforms | [Browser extension](#browser-extension) (auto or manual) | | **Access Mem from any machine** over the internet | [Access Mem Anywhere guide](/zh/docs/remote-access) | | Build a **custom integration** | [REST API](#api-integration) or [CLI](#command-line-interface-cli) | Model Context Protocol (MCP) [#模型上下文协议-mcp] MCP (Model Context Protocol) is the integration method AI agents use to interact with Nowledge Mem. The npx skills above use MCP under the hood. For tools that need manual configuration, read on. Two Integration Paths [#两种集成路径] | Path | Apps | Setup | Autonomous behavior | | --------- | ------------------------------------------------------------------------------------------------ | ---------------------- | ---------- | | **Skill-compatible** | Claude Code, Codex, Cursor, OpenCode, [OpenClaw](https://openclaw.ai), [Alma](https://alma.now/) | `npx skills add` or install the plugin | Built-in triggers, no prompting needed | | **MCP-only** | Claude Desktop, Cursor, ChatWise, etc. | Configure MCP + system prompt | Needs a system prompt for autonomy | **Skill-compatible apps** (Claude Code, Codex, Cursor, OpenCode, OpenClaw, Alma): the npx skills command above is fastest. Or jump to [Claude Code](#claude-code) / [Codex CLI](#codex-cli) / [Alma](#alma) for tool-specific setup. **MCP-only apps**: read on to configure MCP and add a system prompt for autonomous behavior. MCP Capabilities [#mcp-能力] * **Search memories**: `memory_search` * **Read Working Memory**: `read_working_memory` * **Add memories**: `memory_add` * **Update memories**: `memory_update` * **List memory labels**: `list_memory_labels` * **Save/import conversation threads**: `thread_persist` * **Prompts**: `sum` (summarize into memories), `save` (save the conversation thread) MCP Server Configuration [#mcp-服务器配置] System Prompt for Autonomous Behavior [#自主行为的系统提示] For MCP-only apps to act autonomously (without explicit commands), add the following instructions to the agent's system prompt or its CLAUDE.md/AGENTS.md file:

```markdown
## Nowledge Mem Integration

You have access to Nowledge Mem for knowledge management. Use these tools proactively:

**At session start (`read_working_memory`):**
- Read ~/ai-now/memory.md for today's briefing
- Understand the user's current focus areas, priorities, and unresolved flags
- Reference this context naturally when it's relevant to the task at hand

**When to search (`memory_search`):**
- The current topic relates to prior work
- A problem resembles one solved in the past
- The user asks about past decisions ("why did we choose X?")
- Complex debugging may match past root causes

**When to save memories (`memory_add`):**
- After solving a complex problem or debugging session
- When an important decision is made, with its rationale
- After a key insight (an "aha" moment)
- When documenting a process or workflow
- Skip: routine fixes, work in progress, generic Q&A

**Memory categories (use as labels):**
- insight: key learnings, realizations
- decision: choices with rationale and trade-offs
- fact: important information, data points
- procedure: operational knowledge, workflows
- experience: events, conversations, outcomes

**Memory quality:**
- Atomic and actionable (not vague)
- Self-contained context (understandable without the conversation)
- Focus on "what was learned," not "what was discussed"

**Importance levels (0.1-1.0):**
- 0.8-1.0: critical decisions, breakthroughs
- 0.5-0.7: useful insights, standard decisions
- 0.1-0.4: background information, minor details

**When to save conversation threads (`thread_persist`):**
- Only when the user explicitly asks ("save this session")
- Never auto-save without asking
```

This enables autonomous memory operations in apps like Claude Desktop, Cursor, and ChatWise. Browser Extension [#浏览器扩展] Nowledge Mem Exchange is a browser extension that captures memories from AI conversations. It supports ChatGPT, Claude, Gemini, and 13+ platforms, running in the Chrome side panel alongside your conversation.
Three Capture Modes [#三种捕获方式] | Mode | How it works | Best for | | -------- | ------------------ | ------------------ | | **Auto-capture** | Monitors your conversations and saves valuable insights on its own | Set it and forget it. The extension decides what's worth remembering | | **Manual distill** | You trigger capture for a specific conversation | When you know a conversation holds something important | | **Thread backup** | Imports the full conversation as a thread, with incremental dedup | Archive the whole conversation, distill later in the app | Auto-Capture [#自动捕获] When enabled, the extension monitors AI conversations and judges whether each exchange is worth saving. The filter criteria: * **Distilled conclusions**: decisions, plans, finalized approaches * **Important discoveries**: breakthroughs, key findings * **Knowledge exploration**: deep research, synthesis Routine Q&A and small talk are skipped. Before saving, existing memories are searched for duplicates; where appropriate, an existing memory is updated rather than recreated. Auto-capture requires a configured LLM provider. Open the side panel, go to **Settings**, and add your API key. Supported providers: OpenAI, Anthropic, Google, xAI, OpenRouter, Ollama, and OpenAI-compatible endpoints. Thread Backup [#对话备份] Thread backup imports the full conversation as a thread. It tracks what's already been synced, so later backups capture only new messages (incremental sync). After import, you can trigger distillation in the app to extract standalone memories. For local AI coding assistants, Nowledge Mem also supports **AI session discovery (auto-sync)**: incremental sync for **Claude Code, Cursor, Codex, and OpenCode**. Supported Platforms [#支持的平台] The extension supports all major AI chat services: | Platform | Sites | | -------------- | -------------------------- | | **ChatGPT** | openai.com, chatgpt.com | | **Claude** | claude.ai | | **Gemini** | gemini.google.com | | **Perplexity** | perplexity.ai | | **DeepSeek** | chat.deepseek.com | | **Kimi** | kimi.moonshot.cn | | **Qwen** | qwen.ai, tongyi.aliyun.com | | **POE** | poe.com | | **Manus** | manus.im | | **Grok** | grok.com, grok.x.ai, x.ai | | **Open WebUI** | localhost, private IPs | | **ChatGLM** | chatglm.cn | | **MiniMax** | agent.minimaxi.com | Pro users with a configured LLM can auto-generate a handler for any AI chat site. Navigate to the site, open the side panel, and click **Generate Handler**. The extension analyzes the page structure and creates a custom handler automatically. Connect the Extension to Access Mem Anywhere [#让扩展接入随处访问-mem] If you've already exposed the Mem API from the desktop app via **Settings → Access Mem Anywhere**: 1. Open any supported AI chat page and open the extension side panel 2. Click **Settings** 3. Under **Access Mem Anywhere**, paste: * `export NMEM_API_URL="https://"` * `export NMEM_API_KEY="nmem_..."` 4. Click **Fill URL + key** 5. Click **Save**, then **Test connection** Full walkthrough (both Quick link and Cloudflare account modes): [Access Mem Anywhere](/zh/docs/remote-access). Downloads [#下载] The extension can also download any conversation thread as a `.md` file for archiving or sharing. } title="MD format reference"> A sample conversation file in MD format Thread File Import [#对话线程文件导入] Upload conversation files exported from other AI tools into Nowledge Mem. AI Session Discovery (Auto-Sync) [#ai-对话发现自动同步] Discover and import local AI coding assistant sessions directly in the app: | Client | Sync | Entry point | | --------------- | ----------- | -------------------- | | **Claude Code** | Auto-discovery + incremental sync | Threads → Import → Find AI Sessions | | **Cursor** | Auto-discovery + incremental sync | Threads → Import → Find AI Sessions | | **Codex** | Auto-discovery + incremental sync | Threads → Import → Find AI Sessions | | **OpenCode** | Auto-discovery + incremental sync | Threads → Import → Find AI Sessions | Batch Import (Multiple Threads at Once) [#批量导入一次多个对话线程] Import all conversations from a single export file: | Source | File format | How to export | | ------------ | ------------------ | ------------------- | | **ChatGPT** | `chat.html` | Settings → Data Controls → Export Data | | **ChatWise** | `.zip` (containing JSON files) | Export all chats from the ChatWise app | Single Thread Import [#单个对话线程导入] Import a single conversation: | Source | File format | Notes | | ------------ | ------- | --------------------- | | **Cursor** | `.md` | Export a conversation from Cursor | | **ChatWise** | `.html` | Single-chat HTML export | | **Generic** | `.md` | Any markdown with user/assistant formatting | Custom import tooling for developers: * **Thread API**: create threads from your own tools via the API ([API reference](https://mem.nowledge.co/docs/api/threads/post)) * **Markdown format**: convert conversations into importable `.md` files ([format reference](https://github.com/nowledge-co/nowledge-mem/blob/main/refs/nowledge_mem_exchange/example_conversation_file.md)) } title="Create Thread API"> API documentation for creating threads in Nowledge Mem } title="MD format reference"> A sample conversation file in MD format Deep Integrations [#深度集成] The following tools offer one-click setup, built-in skills, or native memory support. **DeepChat** and **LobeHub** ship Nowledge Mem as a built-in memory option. Claude Desktop [#claude-desktop] A one-click extension that lets Claude Desktop read and write your Nowledge Mem knowledge base directly. Download the extension Install the extension Make sure Python 3.13 is installed on your system. Open **Terminal.app** and run:

```bash
which brew || /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
python3.13 --version || /opt/homebrew/bin/brew install python@3.13
```

1. Double-click the `claude-dxt.mcpb` file from your browser's downloads 2. Click **Install** in the installation dialog 3. 
Restart the Claude Desktop app Install the extension You can now ask Claude, at any point in a conversation, to save insights to Nowledge Mem, update existing memories, or search your knowledge base. Using Mem in Claude Desktop > Note: if you can't enable Mem in Claude Desktop, check the logs with `tail -n 20 -F ~/Library/Logs/Claude/mcp*.log` and share them with us. Claude Code [#claude-code] Claude Code supports skills: install the plugin and get built-in autonomous behavior. No system prompt or MCP configuration needed. Use Nowledge Mem directly in Claude Code via the CLI plugin. The plugin automatically: * Searches the knowledge base for relevant context * Suggests distillation at key discoveries * Saves sessions on request Install the Claude Code plugin Install the Nowledge Mem plugin so Claude Code searches, saves, and records key findings automatically. The plugin talks to the knowledge base through the `nmem` CLI. See: [Claude Code plugin docs](https://docs.claude.com/en/docs/claude-code/plugins).

```bash
# Add the Nowledge community marketplace
claude plugin marketplace add nowledge-co/community
# Install the Nowledge Mem plugin
claude plugin install nowledge-mem@nowledge-community
```

**Prerequisite**: the plugin requires the `nmem` CLI. Install it with:

```bash
# Option 1 (recommended): use uvx (no install needed)
curl -LsSf https://astral.sh/uv/install.sh | sh
uvx nmem --from nmem-cli --version
# Option 2: install with pip
pip install nmem-cli
```

**Note**: on Windows/Linux with the Nowledge Mem desktop app installed, `nmem` is bundled. On macOS or remote servers, use `uvx` or install it manually. **Update the plugin**: get the latest version:

```bash
claude plugin marketplace update
claude plugin update nowledge-mem@nowledge-community
# Restart Claude Code to apply changes
```

Usage Three ways to use Nowledge Mem in Claude Code chat: **Slash commands (quick access)** Type these directly: * `/save` - save the current session to Nowledge Mem * `/sum` - distill conversation insights into memories * `/search <query>` - search your knowledge base **Natural language** * Say "save this session" or "save this conversation" * Claude automatically runs `nmem t save --from claude-code` * Say "distill this conversation" or "save the key insights" * Claude analyzes the thread and creates structured memories with `nmem m add` **Autonomous (via skills)** The plugin includes four skills that run automatically: * **Read Working Memory**: loads the daily briefing at session start and after context compaction * **Search Memory**: searches automatically when past work is referenced * **Distill Memory**: suggests distillation at breakthrough moments * **Save Thread**: saves the session on explicit request **Lifecycle hooks** The plugin includes [Claude Code hooks](https://code.claude.com/docs/en/hooks) for automatic lifecycle management: | Event | Trigger | Action | | ------------------ | ------ | ------------------------------------- | | `SessionStart` (startup) | New session begins | Injects the Working Memory briefing | | `SessionStart` (compaction) | After context compaction | Re-injects Working Memory and prompts Claude to save important findings | Hooks run automatically. Working Memory is injected at startup and after compaction, so Claude always knows your current priorities. After compaction, Claude is prompted to save important findings via `nmem m add`. **Autonomous knowledge capture** Create custom agents that capture insights, decisions, and learnings automatically as you work. See the full example: **[AGENTS.md](https://github.com/nowledge-co/community/blob/main/examples/AGENTS.md)** The example shows how to: * Create a memory-guardian agent for autonomous knowledge capture * Use the 5 memory categories (insight, decision, fact, procedure, experience) * Apply importance scoring (0.1-1.0) * Link memories to their source threads with `--source-thread` * Combine agents with hooks for complete session management The `AGENTS.md` file follows the general [agents.md standard](https://agents.md/) and works with any AI coding agent. Codex CLI [#codex-cli] Codex supports custom prompts: install them to get built-in slash commands. No MCP configuration needed. Use Nowledge Mem in Codex via the `nmem` CLI and custom prompts. **Install the nmem CLI** The Codex prompts use `nmem` CLI commands. Install with:

```bash
# Option 1 (recommended): use uvx (no install needed)
curl -LsSf https://astral.sh/uv/install.sh | sh
uvx --from nmem-cli nmem --version
# Option 2: install with pip
pip install nmem-cli
```

**Note**: on Windows/Linux with the Nowledge Mem desktop app installed, `nmem` is bundled. On macOS or remote servers, use `uvx` or install it manually. **Install the custom prompts** Install the custom prompts to add slash commands for saving sessions and distilling insights. Install: > Fresh install:

```bash
curl -fsSL https://raw.githubusercontent.com/nowledge-co/community/main/nowledge-mem-codex-prompts/install.sh | bash
```

> Update an existing install:

```bash
curl -fsSL https://raw.githubusercontent.com/nowledge-co/community/main/nowledge-mem-codex-prompts/install.sh -o /tmp/install.sh && bash /tmp/install.sh --force && rm /tmp/install.sh
```

Use in Codex chat: **Slash commands** Type these directly: * `/prompts:read_working_memory` - load the daily Working Memory briefing for context * `/prompts:save_session` - save the current session with `nmem t save --from codex` * `/prompts:distill` - distill insights with `nmem m add` Or type `/` and search for "memory", "save", or "distill" to find them. **Troubleshooting** * **"Command not found: nmem"** → use `uvx --from nmem-cli nmem --version` or install with `pip install nmem-cli` * **"Command not found: uvx"** → install uv with `curl -LsSf 
https://astral.sh/uv/install.sh | sh`, then install nmem with `uvx --from nmem-cli nmem --version` * **Sessions not listed** → make sure you're in the right project directory DeepChat [#deepchat] DeepChat has built-in Nowledge Mem integration: one toggle enables memory saving and search. Enable MCP in DeepChat Toggle the switch under Settings > MCP Settings Enable Nowledge Mem Toggle nowledge-mem under Custom Servers DeepChat toggle highlighted LobeHub [#lobehub] LobeHub (formerly LobeChat) has built-in Nowledge Mem integration: one toggle enables memory saving and search. One-click install Use one-click install to add Nowledge Mem directly in LobeHub: Click the Install button to install the Nowledge Mem LobeHub plugin. LobeHub installation demo OpenClaw [#openclaw] [OpenClaw](https://openclaw.ai) is an open-source AI agent framework. With this plugin, your OpenClaw gets persistent memory across sessions: it remembers what you said last week, the decisions you documented months ago, and what you're focused on today. Source: [`community/nowledge-mem-openclaw-plugin`](https://github.com/nowledge-co/community/tree/main/nowledge-mem-openclaw-plugin) Install the plugin

```bash
openclaw plugins install @nowledge/openclaw-nowledge-mem
```

Enable it in `~/.openclaw/openclaw.json`

```json
{ "plugins": { "slots": { "memory": "openclaw-nowledge-mem" }, "entries": { "openclaw-nowledge-mem": { "enabled": true, "config": { "autoRecall": true, "autoCapture": false, "maxRecallResults": 5 } } } } }
```

Restart OpenClaw and run `openclaw nowledge-mem status` to confirm the connection **What you can do:** * `/remember` anything; new sessions still know it * Ask "what was I working on last Tuesday" and the AI lists that day's activities and decisions * Trace how an idea evolved: which documents shaped it and how it changed over time * Every morning the AI reads the daily briefing (Working Memory) automatically and starts already in context * Slash commands: `/remember`, `/recall`, `/forget` 5-minute setup and full feature guide: [**OpenClaw × Nowledge Mem**](/zh/docs/integrations/openclaw) Alma [#alma] An [Alma](https://alma.now/) plugin providing persistent memory workflows. Source: [`community/nowledge-mem-alma-plugin`](https://github.com/nowledge-co/community/tree/main/nowledge-mem-alma-plugin) Clone the plugin, install dependencies, and copy it into Alma's local plugin directory

```bash
git clone https://github.com/nowledge-co/community.git
cd community/nowledge-mem-alma-plugin
npm install
mkdir -p ~/.config/alma/plugins/nowledge-mem
cp -R . ~/.config/alma/plugins/nowledge-mem
```

Restart Alma **The plugin provides:** * **Tool set**: memory query/search/store/show/update/delete + thread search/show/create/delete + Working Memory * **Command palette actions**: status check, search, save memory, read Working Memory, save the current thread * **Auto-recall hook**: injects Working Memory + relevant memories into the first outgoing message of each thread * **Optional auto-capture hook**: saves the current thread on app exit * **Local runtime**: uses the `nmem` CLI (falls back to `uvx --from nmem-cli nmem`) Raycast [#raycast] Search your knowledge base from [Raycast](https://raycast.com). Four commands: Source: [`community/nowledge-mem-raycast`](https://github.com/nowledge-co/community/tree/main/nowledge-mem-raycast) | Command | What it does | | --------------------- | ------------------------------------------- | | **Search Memories** | Semantic search with relevance scores; copy content or titles | | **Add Memory** | Save a memory with title, content, and importance | | **Working Memory** | View the daily briefing | | **Edit Working Memory** | Edit `~/ai-now/memory.md` inline; changes are respected by all connected AI tools | **Raycast Store** (coming soon): once the [Store submission](https://github.com/raycast/extensions/pull/25451) is merged, search "Nowledge Mem" in the Raycast Store to install. **Install from source** (available now):

```bash
git clone https://github.com/nowledge-co/community.git
cd community/nowledge-mem-raycast
npm install && npm run dev
```

Requires Nowledge Mem running locally. The extension searches and creates memories via the HTTP API at `localhost:14242` and reads Working Memory from `~/ai-now/memory.md`. LLM-Friendly Docs [#llm-友好文档] Every page on this docs site is available as clean Markdown for AI agents and LLMs. Add an `Accept: text/markdown` header when requesting any docs page to get Markdown instead of HTML:

```bash
# Fetch any docs page as Markdown
curl -H "Accept: text/markdown" https://mem.nowledge.co/docs
curl -H "Accept: text/markdown" https://mem.nowledge.co/docs/getting-started
curl -H "Accept: text/markdown" https://mem.nowledge.co/docs/integrations
```

Dedicated endpoints are also available: | Endpoint | Returns | | --------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | | [`/llms-full.txt`](https://mem.nowledge.co/llms-full.txt) | All docs pages merged into one file | | `/llms.mdx/docs/` | Single-page 
Markdown (e.g. [`/llms.mdx/docs/getting-started`](https://mem.nowledge.co/llms.mdx/docs/getting-started)) | No authentication required. API Integration [#api-集成] A RESTful API with full access to your knowledge base. } href="/zh/docs/api" title="API Reference"> Complete documentation for the Nowledge Mem RESTful API. } title="OpenAPI Spec"> openapi.json Command-Line Interface (CLI) [#命令行界面-cli] The `nmem` CLI gives developers and AI agents access to the knowledge base from the terminal. Installation [#安装] | Platform | Install | | ----------- | -------------------------- | | **macOS** | Settings → Preferences → Developer Tools → Install CLI | | **Windows** | Installed automatically with the app | | **Linux** | Included in the deb/rpm packages | Quick Start [#快速入门]

```bash
# Check connection
nmem status
# Search memories
nmem m search "project notes"
# List recent memories
nmem m
# Create a memory
nmem m add "Important insight" --title "Project learning"
# Search threads
nmem t search "architecture"
# Save Claude Code/Codex sessions via the CLI
nmem t save --from claude-code
nmem t save --from codex -s "Summary of completed work"
# Create a thread from content
nmem t create -t "Session notes" -c "Key discussion points..."
# Create a thread from a file
nmem t create -t "Meeting notes" -f notes.md
```

AI Agent Integration [#ai-智能体集成] The CLI is designed for AI agent workflows, with JSON output support:

```bash
# Get JSON output for parsing
nmem --json m search "API design"
# Chain commands
ID=$(nmem --json m add "Note" | jq -r '.id')
nmem --json m update "$ID" --importance 0.9
# Multi-message thread creation
nmem t create -t "Session" -m '[{"role":"user","content":"Q"},{"role":"assistant","content":"A"}]'
```

Command Reference [#命令参考] | Command | Alias | Description | | --------------- | -------- | ------- | | `nmem status` | | Check server connection | | `nmem stats` | | Database statistics | | `nmem memories` | `nmem m` | Memory operations | | `nmem threads` | `nmem t` | Thread operations | For full CLI documentation, run `nmem --help` or see the CLI reference on GitHub Built something with the API or CLI? Share it in GitHub Issues, Discord, or by email. Outstanding projects can earn a Pro license. Next Steps [#下一步] Setup problems? See the troubleshooting guide: * **[Troubleshooting](/zh/docs/troubleshooting)** - Common issues and solutions * **[Background Intelligence](/zh/docs/advanced-features)** - Knowledge graph, insights, and autonomous features # Library (/docs/zh/library) import { Step, Steps } from 'fumadocs-ui/components/steps'; import VideoPlayer from "@/components/ui/video-player"
Drag a 40-page architecture review into the Library. Ask in the Timeline: *"What does the review say about API rate limiting?"* The answer cites page 12 of the document and links to the Redis rate-limiting decision you saved three months ago. Documents and memories are searched together. The Library holds PDFs, Word files, presentations, and Markdown. Content is parsed, segmented, and indexed. Imported documents are retrievable through the Timeline, global search, and MCP-connected AI tools. Supported Formats [#支持的格式] | Format | Extensions | Processing | | ------------ | ----------- | ----------- | | **PDF** | .pdf | Text extracted, segmented, indexed | | **Word** | .docx, .doc | Parsed to text, segmented, indexed | | **Presentations** | .pptx | Slide content extracted and indexed | | **Markdown** | .md | Parsed and indexed directly | Adding Documents [#添加文档] Drag files into the Timeline input, or import from the Library view. Documents go through this pipeline: 1. **Parse**: extract content from the file format 2. **Segment**: split into searchable passages 3. **Index**: add to the vector and keyword indexes Processing status is visible in the Library view. Once indexed, document content is retrievable through the Timeline, global search, and MCP-connected AI tools. Searching Documents [#搜索文档] Documents are searched together with memories. Ask *"What does the Q4 report say about churn?"* in the Timeline and the search covers both your memories and your imported documents. You can also browse and search documents directly in the Library view. Documents vs. Memories [#文档与记忆的关系] Documents are sources for your knowledge base, not memories themselves. The distinction: * **Memories** are the atomic insights, decisions, and facts that you or the system distill * **Documents** are reference material you import whole When you distill a document, its insights are extracted as memories and connected to the knowledge graph. The document itself stays in the Library as a source. Next Steps [#下一步] * **[Getting Started](/zh/docs/getting-started)**: the Timeline and every way to add knowledge * **[Background Intelligence](/zh/docs/advanced-features)**: how imported knowledge connects to your graph * **[Search & Relevance](/zh/docs/search-relevance)**: how search ranks results across memories and documents # Mem Pro Plan (/docs/zh/mem-pro) import { Card as RawCard, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card" import { Badge } from "@/components/ui/badge" import { Button } from "@/components/ui/button" import { ArrowRight, Download } from "lucide-react" import { Step, Steps } from 'fumadocs-ui/components/steps'; Free vs. Pro [#免费-vs-pro-计划] Nowledge Mem comes in **Free** and **Pro** plans. Pro offers unlimited memories, Remote LLM integration (BYOK), and other advanced features. See the [pricing page](https://mem.nowledge.co/zh/pricing) for a detailed comparison. Activate Your Lifetime Pro License [#激活你的终身-pro-许可证] Visit the pricing page and click the **Lifetime Pro** button to check out: Complete payment with your email address. This email receives the license key and is permanently tied to your Pro activation. Payment page You'll receive an email containing your license key. You can retrieve your license key anytime at mem.nowledge.co/licenses using your email address. Open Nowledge Mem and navigate to **Settings** → **Plan**: Free plan Enter your email address and license key, then click **Activate License**: Activate Pro Pro status is shown after activation: Pro activated Manage your activated devices anytime at mem.nowledge.co/licenses. Activation or license problems? Contact [hello@nowledge-labs.ai](mailto:hello@nowledge-labs.ai). # Access Mem Anywhere (/docs/zh/remote-access) import { Callout } from 'fumadocs-ui/components/callout'; import { Step, Steps } from 'fumadocs-ui/components/steps'; Nowledge Mem can expose the local API through a Cloudflare Tunnel. You get a public URL, but every request must still authenticate with a Mem API key. Use this guide when you want Mem as a unified memory hub shared across computers, agent nodes, and browser tools. Choose a Connection Type First [#先选连接方式] | Type | Best for | URL you get | | ----------------- | --------- | ------------------------------------------ | | **Quick link** | Up and running in under a minute | Random `*.trycloudflare.com` | | **Cloudflare account** | Stable long-term daily use | A fixed URL on your own domain (e.g. `https://mem.example.com`) | Before You Start [#开始前确认] Open this guide from **Settings → Access Mem Anywhere → Guide**. * A quick link needs no Cloudflare account and no domain. * Cloudflare account mode requires a domain already hosted on Cloudflare. * If you don't have a domain yet, start with a **quick link**. * In Cloudflare account mode, the final public URL only appears after you create the hostname route. Path A: Quick Link (No Account) [#路径-a快速链接无需账号] Enable remote connections in Mem [#在-mem-打开远程连接] Open **Settings → Access Mem Anywhere**. If you also want LAN access, enable **Allow devices on same Wi-Fi**. Select Quick link and start [#选择-quick-link-并启动] Under **Access from Anywhere**, choose **Quick link** and click **Start**. Wait for the status to become **Live**. Copy the URL and API key [#复制-url-和-api-key] From the **Ready to connect** section, copy: * **URL** * **API key** Click **Rotate** when you need a fresh key. Verify from another device [#在另一台设备验证]

```bash
export NMEM_API_URL="https://"
export NMEM_API_KEY="nmem_..."
nmem status
```

Expected: `status ok`. Path B: Cloudflare Account (Fixed URL) [#路径-bcloudflare-账号固定-url] You need a domain managed in Cloudflare DNS (e.g. `example.com`) before you can get a fixed URL. Create a tunnel and get the token [#创建-tunnel-并拿到-token] In Cloudflare Zero Trust: 1. Open Networks → Connectors → Create a tunnel. 2. Click Select Cloudflared Cloudflare Connectors page 3. Enter a tunnel name and click Save tunnel Fill in the tunnel name 4. 
In **Install and run connectors**, copy the token from the command, for example:

```bash
sudo cloudflared service install ...
```

In the Mem desktop app, you can paste either: * the raw token; or * the full command (supports `service install `, `--token `, `--token=`). Mem extracts the token automatically. Copy the token from the command Create the Public Hostname route [#创建-public-hostname-路由] On the tunnel's routes / hostname routes page: 1. Create a hostname (e.g. `mem.example.com`). 2. Bind it to the tunnel you just created. This step creates the usable fixed public URL. Hostname routes list Create the hostname route Map the hostname to the local Mem API [#将-hostname-映射到本机-mem-api] 1. Open Networks → Connectors → the tunnel you created. Open the tunnel details 2. Under Published application routes, click Add a published application route Add an application route 3. Map `mem.example.com` to the local Mem service: * Subdomain: `mem` * Domain: your Cloudflare-hosted domain * Service Type: `HTTP` * Service URL: `http://127.0.0.1:14242` Do not append `/remote-api`. Map to the local Mem API Back in Mem: save and start [#回到-mem-保存并启动] Back in Settings → Access Mem Anywhere → Cloudflare account: * Public URL: `https://mem.example.com` * Tunnel token: paste the raw token or the full `cloudflared` command Then: * Click Save * Click Start * Click Rotate if you need a new key * Click Copy to copy the URL and API key Verify from another device [#在另一台设备验证-1]

```bash
export NMEM_API_URL="https://mem.example.com"
export NMEM_API_KEY="nmem_..."
nmem status
```

Expected: `status ok`. Using Other Clients [#在其他客户端使用] nmem CLI [#nmem-cli]

```bash
export NMEM_API_URL="https://"
export NMEM_API_KEY="nmem_..."
nmem status
nmem m search "project notes"
```

Browser Extension (SidePanel) [#浏览器扩展sidepanel] Open any supported AI chat page, then open the **Nowledge Mem Exchange** side panel: 1. Click **Settings** 2. Under **Access Mem Anywhere**, paste the terminal environment variables copied from the Mem desktop app:

```bash
export NMEM_API_URL="https://"
export NMEM_API_KEY="nmem_..."
```

3. Click **Fill URL + key** 4. Click **Save** 5. Click **Test connection** (it should report success) You can also fill in the URL + key manually in the same section. OpenClaw Plugin [#openclaw-插件] Both approaches work; pick whichever suits you: **Option A: write it into the plugin config (recommended)** Add `apiUrl` and `apiKey` to the plugin entry in `~/.openclaw/openclaw.json`:

```json
{ "plugins": { "slots": { "memory": "openclaw-nowledge-mem" }, "entries": { "openclaw-nowledge-mem": { "enabled": true, "config": { "autoRecall": true, "autoCapture": false, "maxRecallResults": 5, "apiUrl": "https://", "apiKey": "nmem_..." } } } } }
```

The API key is passed to the `nmem` subprocess only via environment variables; it never appears in logs or command-line arguments. **Option B: environment variables** Set them before starting OpenClaw:

```bash
export NMEM_API_URL="https://"
export NMEM_API_KEY="nmem_..."
```

Both options behave the same. Use Option A if OpenClaw runs as a service or you want the config self-contained; use Option B to keep the secret out of the config file. MCP / Agent Nodes [#mcp--智能体节点] MCP clients connect over HTTP and need the API key in the `Authorization` header. **Cursor** (`~/.cursor/mcp.json` or workspace `.cursor/mcp.json`):

```json
{ "mcpServers": { "nowledge-mem": { "url": "https:///mcp", "type": "streamableHttp", "headers": { "APP": "Cursor", "Authorization": "Bearer nmem_..." } } } }
```

**Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{ "mcpServers": { "nowledge-mem": { "url": "https:///mcp", "type": "streamableHttp", "headers": { "APP": "Claude", "Authorization": "Bearer nmem_..." } } } }
```

**Codex CLI** (`~/.codex/config.toml`):

```toml
[mcp_servers.nowledge-mem]
url = "https:///mcp"
[mcp_servers.nowledge-mem.http_headers]
APP = "Codex"
Authorization = "Bearer nmem_..."
```

**Claude Code / CI / other shell-based tools**: environment variables also work:

```bash
export NMEM_API_URL="https://"
export NMEM_API_KEY="nmem_..."
Browser Extension (SidePanel) [#浏览器扩展sidepanel]

Open any supported AI chat page, then open the **Nowledge Mem Exchange** side panel:

1. Click **Settings**
2. Under **Access Mem Anywhere**, paste the terminal environment variables copied from the Mem desktop app:

```bash
export NMEM_API_URL="https://"
export NMEM_API_KEY="nmem_..."
```

3. Click **Fill URL + key**
4. Click **Save**
5. Click **Test connection** (it should report success)

You can also fill in the URL + key manually in the same area.

OpenClaw Plugin [#openclaw-插件]

Both approaches work; pick whichever suits you.

**Option A: plugin config (recommended)**

Add `apiUrl` and `apiKey` to the plugin entry in `~/.openclaw/openclaw.json`:

```json
{
  "plugins": {
    "slots": { "memory": "openclaw-nowledge-mem" },
    "entries": {
      "openclaw-nowledge-mem": {
        "enabled": true,
        "config": {
          "autoRecall": true,
          "autoCapture": false,
          "maxRecallResults": 5,
          "apiUrl": "https://",
          "apiKey": "nmem_..."
        }
      }
    }
  }
}
```

The API key is passed to the `nmem` subprocess only via environment variables; it never appears in logs or command-line arguments.

**Option B: environment variables**

Set these before launching OpenClaw:

```bash
export NMEM_API_URL="https://"
export NMEM_API_KEY="nmem_..."
```

Both options behave identically. Use Option A if OpenClaw runs as a service or you want self-contained config; use Option B to keep secrets out of config files.

MCP / Agent Nodes [#mcp--智能体节点]

MCP clients connect over HTTP and must pass the API key in the `Authorization` header.

**Cursor** (`~/.cursor/mcp.json` or workspace `.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "nowledge-mem": {
      "url": "https:///mcp",
      "type": "streamableHttp",
      "headers": {
        "APP": "Cursor",
        "Authorization": "Bearer nmem_..."
      }
    }
  }
}
```

**Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "nowledge-mem": {
      "url": "https:///mcp",
      "type": "streamableHttp",
      "headers": {
        "APP": "Claude",
        "Authorization": "Bearer nmem_..."
      }
    }
  }
}
```

**Codex CLI** (`~/.codex/config.toml`):

```toml
[mcp_servers.nowledge-mem]
url = "https:///mcp"

[mcp_servers.nowledge-mem.http_headers]
APP = "Codex"
Authorization = "Bearer nmem_..."
```

**Claude Code / CI / other shell-based tools**: environment variables work here too:

```bash
export NMEM_API_URL="https://"
export NMEM_API_KEY="nmem_..."
```

Quick Health Check [#快速健康检查]

```bash
curl -H "Authorization: Bearer $NMEM_API_KEY" "$NMEM_API_URL/health"
```

Expected: the health-check JSON.

Wrong-key check:

```bash
curl -H "Authorization: Bearer wrong_key" "$NMEM_API_URL/health"
```

Expected: `401`.

If a proxy strips the authorization header:

```bash
curl "$NMEM_API_URL/health?nmem_api_key=$NMEM_API_KEY"
```

Security and Operating Notes [#安全与运行建议]

* Every remote request must carry an API key.
* You can **Rotate** the key in Settings at any time (the old key is invalidated immediately).
* After the first successful **Start**, the app reconnects automatically on restart until you click **Stop**.
* The Browse-Now / Browser Bridge automation endpoints are local-only and are never exposed through Access Mem Anywhere.
* Shut the tunnel down when you don't need remote access.

FAQ [#常见问题]

* **Start times out**: your network or proxy may be blocking Cloudflare traffic. Retry, or switch to account mode.
* **`401 Missing API key`**: usually a proxy stripped the auth header. Upgrade `nmem`, or fall back to manual query-parameter verification.
* **`429 Too many invalid auth attempts`**: a wrong key was retried repeatedly. Re-copy the key or click **Rotate**.

# Search & Relevance (/docs/zh/search-relevance)

import { Callout } from 'fumadocs-ui/components/callout';

Search is driven by multi-signal scoring, time decay, and feedback loops. Here is how each piece works.

The Scoring Pipeline [#评分管道]

When you search, Nowledge Mem doesn't just match keywords; it combines multiple signals to rank results.

Nowledge Mem scoring pipeline

Semantic Scoring [#语义评分]

This track finds memories that match what you're looking for:

* **Meaning-based search**: finds memories by semantic similarity, not just exact words. Search "design patterns" and find memories about "architectural approaches."
* **Keyword search**: catches exact phrases and technical terms using BM25 ranking.
* **Tag matching**: surfaces memories with matching tags.
* **Graph traversal**: discovers connected memories through entities and topic communities.

Decay and Temporal Scoring [#衰减与时间评分]

This track adjusts results by freshness and your usage:

* **Recency**: recently accessed memories score higher. We use exponential decay with roughly a 30-day half-life.
* **Frequency**: memories you revisit become more durable (logarithmic scaling, diminishing returns).
* **Importance floor**: high-importance memories keep a minimum accessibility even when unused.
* **Temporal match**: boosts memories whose event time matches your query (deep mode only).

These tracks combine into the final score that orders your results.

Memory Decay [#记忆衰减]

Memories fade naturally over time; use reinforces them.

How It Works [#工作原理]

**Recency**: a memory accessed yesterday scores much higher than one accessed three months ago. A 30-day half-life means the score roughly halves each month without access.

**Frequency**: the 10th access matters more than the 100th. Early repetition builds durability; later visits bring diminishing returns.

**Importance floor**: high-importance memories never decay completely. Even long-unvisited, they keep a minimum reachability, so foundational knowledge isn't lost.

What This Means [#这意味着什么]

* Active knowledge stays fresh
* Old memories don't disappear; they just rank lower when equally relevant
* Important knowledge persists regardless of access patterns
* The system learns from your behavior automatically
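As a rough formula, the behavior described above can be pictured like this (an illustration, not the exact implementation; the frequency weight $\alpha$ is an assumed constant):

$$
s_{\text{decay}} = \max\Big(f_{\text{importance}},\; 2^{-\Delta t / 30}\cdot\big(1 + \alpha \log(1 + n)\big)\Big)
$$

where $\Delta t$ is days since last access, $n$ is the number of accesses, and $f_{\text{importance}}$ is the importance floor that keeps high-importance memories reachable.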
Temporal Understanding [#时间理解]

Nowledge Mem understands two kinds of time.

Event Time vs. Record Time [#事件时间-vs-记录时间]

**Event time** is when something actually happened:

* "the product launch in 2020"
* "last quarter's decision"
* "before our migration"

**Record time** is when you saved the memory. You might record a memory today about a 2020 event.

This matters for queries like "recent memories about 2020 events": things you saved recently (record time) about events from 2020 (event time).

Temporal Intent Detection [#时间意图检测]

Temporal intent detection requires deep-mode search. In fast mode, time references are matched as keywords only.

In deep mode, the system interprets time references:

| Query | Interpretation |
| -------------- | ------------- |
| "decisions from 2023" | Event time: 2023 |
| "recent memories" | Record time: recent |
| "recent memories about 2020" | Event: 2020, record: recent |
| "before the migration" | Event: before that event |

Fuzzy references like "last quarter," "around 2020," or "early this year" are converted into meaningful filters.

Date Precision [#日期精度]

When you save a memory about "early 2020," the system:

1. Normalizes it to a searchable date (2020-01-01)
2. Tracks the precision level (year, month, or day)
3. Preserves the original meaning for accurate matching

This lets "memories from 2020" (year precision) behave differently from "memories from January 2020" (month precision).

The Feedback Loop [#反馈循环]

Your usage patterns continuously improve search relevance.

What We Track [#我们跟踪什么]

| Signal | What's captured |
| -------- | ----------- |
| **Impressions** | How often a memory appears in results |
| **Clicks** | When you open a memory to view details |
| **Dwell time** | How long you spend reading |

How It Improves Search [#如何改进搜索]

* A high click rate signals a memory is genuinely useful
* Long dwell time signals valuable content
* Frequent impressions with no clicks may signal declining relevance

No action needed; just use the app normally.

Graph-Powered Discovery [#图驱动的发现]

The knowledge graph extends search reach through entity connections.

How Memories Connect [#记忆如何连接]

Each memory can link to:

* **Entities**: people, concepts, technologies, places mentioned
* **Other memories**: via shared entities or relationships
* **Communities**: topic clusters detected by graph analysis

Searching Through Connections [#通过连接搜索]

**Entity-mediated**: find memories about "database optimization" through shared entities like PostgreSQL or indexing, even when they're tagged differently.

**Community-mediated**: a search for "authentication" may surface memories from your "security practices" community.

**Graph expansion**: start from one memory and explore connected knowledge.

Search Modes [#搜索模式]

Both modes are available in every interface:

Fast Mode [#快速模式]

* Typically responds in under 100 ms
* Direct semantic and keyword matching
* Entity and community search without language-model analysis
* Best for quick lookups

Deep Mode [#深度模式]

* Full language-model analysis
* **Temporal intent detection** (e.g., "what I've been working on recently; social events from the past decade")
* Query expansion for better recall
* Context-aware strategy weighting
* Better for exploratory search

Both modes work in the main search, the global launcher, and the API.

Result Transparency [#结果透明度]

Every result comes with the reasons behind its ranking.

Search Query Details [#搜索查询详情]

After each search you can inspect exactly how your query was interpreted:

* Which search strategies were used
* Temporal intent detection results (in deep mode)
* Query expansion and entity extraction

Score Breakdown [#分数分解]

Hover over any result's score to see how it was computed:

* **Semantic score**: how well the content matches your query
* **Decay score**: freshness based on recency and frequency
* **Temporal boost**: event-time relevance (when applicable)
* **Graph signals**: entity and community connections

Search query details

This helps you understand how your usage patterns shape ranking, and why a given memory appears for a particular query.
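One way to picture how the breakdown above combines (the weights $w_i$ are illustrative assumptions; the actual weighting is context-aware and may differ per query):

$$
\text{final score} = w_1\, s_{\text{semantic}} + w_2\, s_{\text{decay}} + w_3\, s_{\text{temporal}} + w_4\, s_{\text{graph}}
$$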
# Linux Server Deployment (/docs/zh/server-deployment)

import { Step, Steps } from 'fumadocs-ui/components/steps';
import { Tab, Tabs } from 'fumadocs-ui/components/tabs';

Nowledge Mem can run in **headless mode** on a Linux server with no graphical interface. Install the same `.deb` or `.AppImage` package, then manage everything from the command line.

Background Intelligence features (daily briefings, insight detection, knowledge-graph enrichment) require a [Pro license](/zh/pricing). The server itself runs on the Free plan, limited to 20 memories.

System Requirements [#系统要求]

| Requirement | Spec |
| ------------ | ------------------------------------------------------ |
| **OS** | Ubuntu 22.04+, Debian 12+, RHEL 9+, or compatible |
| **Architecture** | x86\_64 |
| **Memory (RAM)** | 8 GiB minimum (16 GiB recommended) |
| **Disk space** | 10 GiB free |
| **Dependencies** | `libgtk-3-0`, `libwebkit2gtk-4.1-0`, `zstd` (installed automatically by the `.deb`) |

Installation [#安装]

For the `.deb`:

```bash
# Install the package
sudo dpkg -i nowledge-mem_*.deb

# Fix missing dependencies
sudo apt-get install -f
```

The `.deb` post-install script automatically:

* Unpacks the bundled Python runtime
* Creates the `nmem` CLI at `/usr/local/bin/nmem`
* Sets up a desktop launcher (ignorable on a headless server)

For the `.AppImage`:

```bash
# Make it executable
chmod +x Nowledge_Mem_*.AppImage

# First run unpacks the Python runtime
./Nowledge_Mem_*.AppImage --appimage-extract

# After the first run the nmem CLI is available
# Path: ~/.local/bin/nmem
```

Verify the CLI works:

```bash
nmem --version
```

Quick Start [#快速开始]

Start the server [#启动服务器]

```bash
nmem serve
```

This runs the server in the **foreground** (Ctrl+C to stop). It listens on `0.0.0.0:14242` by default; customize with flags:

```bash
nmem serve --host 127.0.0.1 --port 8080
```

For production, prefer `nmem service install`. It sets up a **background systemd service** that starts on boot. See [Running as a systemd Service](#作为-systemd-服务运行) below.

Activate the license [#激活许可证]

```bash
nmem license activate <license-key> <email>
nmem license status  # verify activation
```

Download the embedding model [#下载嵌入模型]

```bash
nmem models download
nmem models status  # verify installation
```

Downloads the embedding model used for hybrid search (about 500 MB); this is a one-time download.

Configure an LLM provider [#配置-llm-提供商]

A remote LLM is required on Linux (local LLMs are not supported):

```bash
nmem config provider set anthropic \
  --api-key sk-ant-xxx \
  --model claude-sonnet-4-20250514

nmem config provider test  # verify connectivity
```

Supported providers: `anthropic`, `openai`, `ollama`, `openrouter`, plus OpenAI-compatible endpoints.

Enable Background Intelligence [#启用后台智能]

```bash
nmem config settings set backgroundIntelligence true
nmem config settings set autoDailyBriefing true
```

Verify everything [#验证所有配置]

```bash
nmem status
```

Running as a systemd Service [#作为-systemd-服务运行]

For production deployments, use `nmem service install` to set up a background systemd service that starts on boot:

```bash
# Install, enable, and start in one step
sudo nmem service install

# Custom host/port
sudo nmem service install --host 0.0.0.0 --port 8080
```

```bash
# Without root privileges
nmem service install --user
```

Managing the service [#管理服务]

```bash
nmem service status     # view service status
nmem service logs -f    # follow service logs
nmem service stop       # stop the service
nmem service start      # start the service
nmem service uninstall  # stop, disable, and remove the service
```

If you installed a user-level service, add `--user` to any `nmem service` command.

serve vs. service [#serve-与-service-的区别]

| | `nmem serve` | `nmem service install` |
| -------- | ------------ | ------------------------- |
| **Runs as** | Foreground (current terminal) | Background (systemd) |
| **Stops when** | Ctrl+C or the terminal closes | `nmem service stop` or system shutdown |
| **Starts on boot** | No | Yes (enabled automatically) |
| **Best for** | Testing, development | Production deployments |

Remote Access [#远程访问]

By default the server listens on all network interfaces (`0.0.0.0`). From another machine:

```bash
# With nmem-cli installed on the remote machine
export NMEM_API_URL=http://your-server:14242
nmem status
nmem m search "your query"
```

Install the standalone CLI on the remote machine:

```bash
pip install nmem-cli
# or
uv pip install nmem-cli
```

The server has no built-in authentication. In production, restrict access with firewall rules, or bind to `127.0.0.1` and use an SSH tunnel or an authenticating reverse proxy.
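A minimal sketch of the SSH-tunnel option (assumes `nmem-cli` on your laptop and SSH access to the server):

```bash
# On the server: bind the API to localhost only
nmem serve --host 127.0.0.1 --port 14242

# On your laptop: forward the port over SSH, then use it as if local
ssh -N -L 14242:127.0.0.1:14242 user@your-server &
export NMEM_API_URL=http://127.0.0.1:14242
nmem status
```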

Interactive TUI [#交互式-tui]

For an interactive terminal experience:

```bash
nmem tui
```

The TUI provides a full settings interface, including license activation, LLM configuration, and knowledge-processing toggles.

Configuration Reference [#配置参考]

Environment Variables [#环境变量]

| Variable | Default | Description |
| ----------------------- | ------------------------ | ------------ |
| `NMEM_API_URL` | `http://127.0.0.1:14242` | Server address for CLI commands |
| `NOWLEDGE_DB_PATH` | auto-detected | Override the database location |
| `NOWLEDGE_BACKEND_HOST` | `0.0.0.0` | Server bind address |

CLI Command Summary [#cli-命令摘要]

| Command | Description |
| -------------------------------------------- | ---------------- |
| `nmem serve` | Start the server in the foreground |
| `nmem service install` | Install and start the systemd service |
| `nmem service status` | View systemd service status |
| `nmem service logs -f` | Follow service logs |
| `nmem service stop` / `start` | Stop or start the service |
| `nmem service uninstall` | Remove the systemd service |
| `nmem status` | Check server status |
| `nmem license activate <key> <email>` | Activate a license |
| `nmem models download` | Download the embedding model |
| `nmem config provider set <provider> --api-key <key>` | Configure an LLM provider |
| `nmem config provider test` | Test the LLM connection |
| `nmem config settings` | Show processing settings |
| `nmem config settings set <key> <value>` | Update a setting |
| `nmem tui` | Interactive terminal UI |

Next Steps [#下一步]

* **[CLI Reference](/docs/zh/cli)** - complete CLI documentation
* **[API Reference](/docs/zh/api)** - REST API endpoints
* **[Integrations](/docs/zh/integrations)** - connect AI tools

# Troubleshooting (/docs/zh/troubleshooting)

import { Button } from "@/components/ui/button"
import { Loader2, Trash2, AlertTriangle, Lightbulb, MessageSquare } from "lucide-react"
import { Card, CardContent } from "@/components/ui/card"
import { formatSize } from "@/lib/utils"
import { Github } from "@lobehub/icons"
import { Tabs, Tab, TabsList, TabTrigger, TabContent } from "fumadocs-ui/components/tabs"

export const ClearCacheButton = () => ( )

Viewing Logs [#查看日志]

On macOS, the log file lives at `~/Library/Logs/Nowledge\ Graph/app.log`. View it from a terminal with:

```bash
open -a Console ~/Library/Logs/Nowledge\ Graph/app.log
```

On Windows, the log file is in one of two locations, depending on how you installed:

* `%LOCALAPPDATA%\Packages\NowledgeLabsLLC.NowledgeMem_1070t6ne485wp\logs\app.log` (installed from the Microsoft Store)
* `%LOCALAPPDATA%\NowledgeGraph\logs\app.log` (installer downloaded from the Nowledge Mem website)

Paste either path into the File Explorer address bar to open it:

```shell
%LOCALAPPDATA%\Packages\NowledgeLabsLLC.NowledgeMem_1070t6ne485wp\logs\app.log
```

or:

```shell
%LOCALAPPDATA%\NowledgeGraph\logs\app.log
```

App Takes Too Long to Start [#应用启动时间过长]

**Symptom:** the app hangs during startup or shows timeout errors.

**Fix:** global proxy or VPN software may be blocking the app's direct access to `http://127.0.0.1:14242`.

Configure your proxy or VPN tool to bypass localhost addresses. Add the following to your bypass/exclusion rules:

```
127.0.0.1, localhost, ::1
```

This keeps your proxy/VPN enabled while letting Nowledge Mem talk to its local server. Restart Nowledge Mem after updating the rules.

AI Now Session Fails to Start [#ai-now-会话启动失败]

**Symptom:** clicking **New Task** or resuming a paused task fails; AI Now can't open a session.

**First step:** check the startup diagnostics card inside AI Now.

When a session fails to start, AI Now shows a diagnostics card containing:

* The failing phase (`spawn`, `initialize`, or `new_session`)
* Platform and process exit code
* The launch script's most recent `stderr` output
* A copy-diagnostics button

Click **Details** to expand the technical fields, then **Copy diagnostics** to include in feedback or an issue.

**Common fixes (especially on Windows):**

1. Confirm the installation is complete (the embedded Python and launch scripts exist).
2. After changing plugin or model configuration, restart Nowledge Mem and retry.
3. Temporarily disable antivirus or quarantine rules that block the bundled Python / PowerShell launch.
4. If it's plugin-related, reconnect expired OAuth plugins under **AI Now → Plugins** and retry.

If it still fails, attach the copied diagnostics and `app.log` to your report.

Corrupted Model Cache [#模型缓存损坏]

**Symptom:** search, memory distillation, or knowledge extraction stops working unexpectedly.

**Fix:** clear the model cache and re-download the models.

Navigate to **Settings → Models**, then click:

After clearing the cache, re-download the models you need.

CLI Not Found [#找不到-cli]

**Symptom:** running `nmem` in a terminal returns "command not found".

**Per-platform fixes:**

* **macOS**: install the CLI from **Settings → Preferences → Developer Tools**
* **Windows**: open a **new** terminal window after installing the app (PATH updates need a fresh session)
* **Linux**: the CLI ships in the deb/rpm packages. For manual installs, make sure `/usr/local/bin` is on your PATH

**Quick check:** run `nmem status` to verify the CLI can reach Nowledge Mem.

Remote Access Returns 429 [#远程访问返回-429]

**Symptom:** `nmem status` or `curl` returns `429 Too many invalid auth attempts`.

**Fix:** a client retried a wrong API key too many times.

* Re-copy the URL + key from **Settings → Access Mem Anywhere**
* Confirm `NMEM_API_KEY` is complete, with no stray spaces or quotes
* When in doubt, click **Rotate** to issue a new key

Full walkthrough: [Access Mem Anywhere](/zh/docs/remote-access).

Remote Access Returns 401 Missing API key [#远程访问返回-401-missing-api-key]

**Symptom:** the tunnel URL is reachable, but `nmem status` or `curl` returns `401 Missing API key`.

**Cause:** some network proxies strip the authorization header.

**Fix:**

* Upgrade to the latest `nmem` (it falls back to a proxy-compatible mode automatically)
* Re-copy the URL + key from **Settings → Access Mem Anywhere**
* Manual `curl` works too: `curl "$NMEM_API_URL/health?nmem_api_key=$NMEM_API_KEY"`

Reporting Issues [#报告问题]
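When filing a report, attaching the app log usually speeds things up. For example, on macOS (Windows paths are listed under [Viewing Logs](#查看日志) above):

```bash
# Copy the current log somewhere easy to attach to your report
cp ~/Library/Logs/Nowledge\ Graph/app.log ~/Desktop/nowledge-app.log
```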

# Try These (/docs/zh/try-these)

import { Callout } from 'fumadocs-ui/components/callout';

The Timeline input handles everything: questions, knowledge capture, URLs, files, scheduled tasks. Type naturally and the AI figures out the intent. The queries below show what the system can really do.

These queries grow more powerful as your knowledge grows. After a week of steady use, the results will surprise you.

The Queries [#查询列表]

1. Show today's focus briefing [#1-展示今日焦点简报]

Reads the focus panel in `~/ai-now/memory.md`: active topics, items needing attention, a summary of recent activity. Connected AI tools (Claude Code, Cursor) read it automatically.

2. Which of my ideas changed the most? [#2-哪些想法变化最大]

Finds the longest EVOLVES chains: ideas that went through multiple revisions. Tells the story in order: "In January you decided on PostgreSQL. By March you were weighing a hybrid approach. The latest note confirms a move to a dual-database architecture."

3. What wisdom has crystallized from my notes? [#3-从我的笔记中结晶了哪些智慧]

Shows synthesized "crystals": reference articles the system distills overnight from several related memories. These are insights you can't get from any single note.

4. Summarize my recent coding conversations [#4-总结我最近的编程对话]

If you use Claude Code, Cursor, or Codex, sessions sync automatically. Lists and summarizes your latest coding sessions: what was discussed, what was built, what was decided.

5. We just decided on PostgreSQL as the primary database [#5-刚决定主数据库用-postgresql]

Knowledge capture. The system saves it as a memory, searches related decisions, and mentions connections: "This relates to your earlier note on database scaling." Just type naturally; the AI classifies and stores it.

6. Note this: https://example.com/interesting-article [#6-记一下-httpsexamplecominteresting-article]

Paste a URL and the system fetches, parses, and indexes the content. The AI reads the page and stores a substantive summary as a memory. The URL and its content become searchable. Add a comment before the URL and the AI captures both.

7. Tonight, run knowledge-graph extraction on my recent memories [#7-今晚对我最近的记忆执行知识图谱抽取]

Schedules a background Knowledge Agent task. The agent fires at the requested time with full tool access: it can analyze memories, detect contradictions, create EVOLVES links, or generate crystals. Schedule in natural language: "in 2 hours," "tomorrow morning," "next week." Minimum 5 minutes out, maximum 30 days.

8. Search my documents for [topic] [#8-在我的文档中搜索-主题]

Full-text search across every source document in the Library. Drag files (PDF, Word, Markdown) into the Timeline input, or add them via the Library. Files are parsed, chunked, and indexed for semantic search.

9. What are the major themes in my knowledge? [#9-我的知识有哪些主要主题]

**Note**: needs about a week of steady use and background processing.

Community detection clusters your entities into topic areas, each with an AI summary. The system runs nightly analysis that groups related concepts. You'll see themes you never consciously tracked: a "developer experience" cluster you didn't know existed, or a "data architecture" thread running through months of notes.

The Compounding Effect [#复合效应]

These queries get stronger over time:

* **Week 1**: basic search works. Communities are small or empty.
* **Month 1**: evolution chains appear. Crystals start forming. Themes emerge.
* **Month 3**: cross-domain connections surprise you. The daily briefing is genuinely useful.
* **Month 6**: the system knows your expertise better than you do.

Next Steps [#下一步]

* [Getting Started](/docs/zh/getting-started): set up in five minutes
* [See Your Expertise](/docs/zh/use-cases/expertise-graph): explore the knowledge graph visually
* [Background Intelligence](/docs/zh/advanced-features): how the system learns overnight

# Using Nowledge Mem (/docs/zh/usage)

import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';

Timeline [#timeline]

The Timeline is your home: what you capture, what you ask, and what the system discovers on its own, all in one stream.

Nowledge Mem Timeline

Write anything in the input box at the top. The AI determines your intent and acts on it: a stray thought becomes a memory, a question is answered from your knowledge, a URL gets fetched and indexed, a file gets parsed.

What You'll See [#你会看到什么]

These item types appear in the Timeline:

| Item | Description |
| ------------------ | ------------------- |
| **Capture** | A memory you saved, with auto-generated title and tags |
| **Question** | Your question and the AI's answer from your knowledge base |
| **URL Capture** | A web page fetched, parsed, and stored |
| **Insight** | A connection the system found between your memories |
| **Crystal** | A synthesis of several related memories |
| **Flag** | Contradictory, stale, or unverified content |
| **Working Memory** | Your daily morning briefing |

No manual organizing required.

Your AI Tools [#你的-ai-工具]

Any tool can connect to your knowledge: Claude Code, Cursor, Codex, OpenCode, Alma, or whatever you switch to next week.

**Without Mem:** *"Help me add caching to the API."* The agent asks about your stack, your infrastructure, your preferences. You explain everything from scratch.

**With Mem:** *"Help me add caching to the API."* The agent searches your knowledge, finds last month's Redis decision and the API rate-limiting plan, and writes code that fits your architecture. Zero questions.

Once connected, tools recognize your knowledge base and use it automatically when needed.
Save an insight in Claude Code today; when Cursor hits the same topic tomorrow, it finds it automatically. No exporting, no copying.

You can also ask the agent directly: *"What did I decide about the database migration last month?"* It searches your knowledge to answer.

See [Integrations](/zh/docs/integrations) for setup instructions.

Search [#搜索]

In the App [#应用内]

Press Cmd + K (macOS) to open memory search. Search understands meaning, not just keywords: "design patterns" finds memories about "architectural approaches."

Memory search

Three search modes work together:

* **Semantic search**: finds memories by meaning
* **Keyword search**: exact matches for specific terms
* **Graph search**: discovers memories through connections and relationships

Global Search [#全局搜索]

The global launcher lets you search without opening Nowledge Mem. Press Cmd + Shift + K in any app, search, and paste the result right where you need it. If you use [Raycast](https://raycast.com), the [Nowledge Mem extension](/zh/docs/integrations#raycast) brings the same search into your launcher.

Memory Search Launcher
AI Now [#ai-now]

AI Now is a personal AI agent that runs on your machine. It has your full knowledge base, your connected notes, and the web. Purpose-built capabilities, not just chat:

* **Deep research**: searches your memories and the web together, then synthesizes
* **File analysis**: understands your data in context; "what changed since last quarter" is answerable because it knows last quarter
* **Presentations**: live preview, PowerPoint export
* **Plugins**: Obsidian, Notion, Apple Notes, and any MCP service

Ask about caching and it already knows last month's Redis decision. Analyze data and it ties the numbers to your goals and history. Every capability builds on your knowledge.

AI Now requires a remote LLM. See the full [AI Now](/zh/docs/ai-now) guide.

Command Line [#命令行]

The `nmem` CLI gives you full access from any terminal:

```bash
# Search your memories
nmem m search "authentication patterns"

# Add a memory
nmem m add "We chose JWT with 24h expiry for the auth service"

# JSON output for scripting
nmem --json m search "API design" | jq '.memories[0].content'
```

See the [CLI Reference](/zh/docs/cli) for the full command set.

Remote LLM [#远程-llm]

By default everything runs locally, with no network required. As your knowledge base grows, a remote LLM gives you more processing power.

Remote LLM configuration requires a [Pro license](/zh/docs/mem-pro).

**What it unlocks:**

* **Background Intelligence**: automatic connections, Crystals, Insights, and the daily briefing
* Faster knowledge-graph extraction
* More nuanced semantic understanding
* AI Now agent capabilities

**Privacy:** your data goes only to the LLM provider you choose, never to Nowledge Mem servers. You can switch back to local-only mode at any time.

Go to **Settings > Remote LLM** and enable **Remote** mode. Choose your LLM provider and enter an API key. Test the connection, pick a model, save.

Remote LLM

Next Steps [#下一步]

* **[AI Now](/zh/docs/ai-now)**: deep research and analysis on top of your knowledge
* **[Background Intelligence](/zh/docs/advanced-features)**: how your knowledge grows on its own: knowledge graph, Insights, Crystals, Working Memory
* **[Integrations](/zh/docs/integrations)**: connect all your AI tools

# List Communities (/docs/api/communities/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

List knowledge communities with AI summaries.

# List Entities (/docs/api/entities/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

List entities with optional filtering.

# Health Check (/docs/api/health/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Health check endpoint.

# List Labels (/docs/api/labels/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

List all labels with usage counts.

# Create Label (/docs/api/labels/post)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Create a new label.

# List Memories (/docs/api/memories/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

List memories with filtering and pagination.

# Create Memory (/docs/api/memories/post)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Create a new memory with automatic entity extraction.

# List Sources (/docs/api/sources/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

List sources with optional filtering and pagination.

# List Threads (/docs/api/threads/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

List threads with filtering and pagination.

# Create Thread (/docs/api/threads/post)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Create a new thread with messages.
# OpenClaw × Nowledge Mem (/docs/zh/integrations/openclaw)

import { Step, Steps } from 'fumadocs-ui/components/steps';

Once configured, your OpenClaw remembers what you said last session, the decision you made last week, and the knowledge you wrote into a document three months ago.

Before You Start [#开始之前]

You'll need:

* Nowledge Mem running locally ([Installation](/zh/docs/installation))
* OpenClaw installed ([OpenClaw getting started](https://docs.openclaw.ai/start/openclaw))
* The `nmem` CLI on your PATH

```bash
nmem status   # should show Nowledge Mem running
openclaw --version
```

Setup [#配置步骤]

Install the plugin

```bash
openclaw plugins install @nowledge/openclaw-nowledge-mem
```

Enable the plugin in your OpenClaw config

Open `~/.openclaw/openclaw.json` and add:

```json
{
  "plugins": {
    "slots": { "memory": "openclaw-nowledge-mem" },
    "entries": {
      "openclaw-nowledge-mem": {
        "enabled": true,
        "config": {
          "autoRecall": true,
          "autoCapture": false,
          "maxRecallResults": 5
        }
      }
    }
  }
}
```

Restart OpenClaw and verify

```bash
openclaw nowledge-mem status
```

If it reports that Nowledge Mem is reachable, you're set.

Verify the Setup (1 Minute) [#验证配置1-分钟]

Run these in an OpenClaw chat, in order:

1. `/remember We chose PostgreSQL for task events`
2. `/recall PostgreSQL` — it should be found immediately
3. `/new` — start a new session
4. Ask: `What database did we pick for task events?` — remembered across sessions
5. Ask: `What did I do this week?` — browse by week
6. Ask: `What was I working on on February 17?` — down to a specific day
7. `/forget PostgreSQL task events` — clean deletion

If all seven steps work, the memory system is fully operational.

What You Can Do [#你能做什么]

**Remember anything.** Tell the AI `/remember We decided against microservices because the team is too small`; next week, in a different session, ask "what did we say about that microservices decision?" and it finds it.

**Recover work by date.** Ask "what was I doing last Tuesday" and the AI lists what you saved, which documents you added, and which insights were generated that day. Specific dates work, not just "the last N days."

**Trace a decision's history.** Ask "where did this memory come from, and what's it related to?" and the AI shows the original source document, which related memories were synthesized into higher-level insights, and how the understanding changed over time.

**Start every day with context.** Each morning, Nowledge Mem's Knowledge Agent generates a daily briefing: what you're focused on, what's unresolved, what's new. The AI reads it automatically at session start; no re-introducing your background.

**Save with type and time.** You're not just saving text; you're recording structured knowledge. Tell the AI "remember this as a decision, made in February 2026" and it stores it in the knowledge graph with the right type and time. Eight types are supported: fact, preference, decision, plan, procedure, learning, context, event.

**Slash-command shortcuts**: `/remember`, `/recall`, `/forget`

How Automatic Memory Works [#自动记忆的工作方式]

`autoRecall` and `autoCapture` both run in the background via plugin lifecycle hooks. They are not decisions the AI makes; the AI doesn't call a hidden "save" tool behind your back. The plugin code fires at specific moments, independent of the AI's behavior.

autoRecall: what happens at session start [#autorecall--会话开始时发生什么]

Before the AI sees your message, the plugin quietly:

1. Reads your **Working Memory** (the daily briefing the Knowledge Agent generates each morning: what you're focused on, what's unresolved, what's new)
2. **Searches the knowledge graph** for memories relevant to your current message
3. Injects both into the system prompt as implicit context, along with usage guidance for the Nowledge Mem tools

The AI starts out already knowing your background; no re-introductions needed.

autoCapture: what happens at session end [#autocapture--会话结束时发生什么]

By default, the AI saves only when you ask (`autoCapture: false`). To enable auto-save:

```json
"autoCapture": true
```

At the end of every session (and on context compaction and resets), **two independent things happen**:

**1. The full conversation is saved as a Thread.** Every message between you and the AI is appended to a Thread in Nowledge Mem bound to this session. This is unconditional: as long as the session ends normally, it's saved regardless of content. Browse these conversations chronologically with `nowledge_mem_timeline`, or search them from any tool.

**2. A memory may be extracted.** If your last message contains a decision, preference, or declarative fact, like "I prefer TypeScript" or "we decided against microservices," the plugin additionally creates one structured memory. Questions, very short messages, and slash commands are skipped. This memory is independent of the Thread: you may get both, either, or neither.

**Context compaction** is OpenClaw's process of compressing long conversations to fit the model's context window. The plugin captures the transcript when compaction happens, so the compacted-away messages aren't lost; they still land in your knowledge base.

Messages are deduplicated automatically: even if the plugin fires at both session end and reset, you won't get duplicates in Nowledge Mem.

Using Multiple Machines [#在多台机器上使用]

If your OpenClaw runs on another machine or server, point the plugin config at your Nowledge Mem:

```json
"apiUrl": "https://your-nowledge-mem-url",
"apiKey": "nmem_..."
```

Or via environment variables:

```bash
export NMEM_API_URL="https://your-nowledge-mem-url"
export NMEM_API_KEY="nmem_..."
```

The API key is passed internally only; it never shows up in logs or shell history. See [Access Mem Anywhere](/zh/docs/remote-access).

Having Trouble? [#遇到问题]

**The plugin is installed, but OpenClaw doesn't seem to use it**

Check that the value of `plugins.slots.memory` is exactly `openclaw-nowledge-mem`, and that you restarted OpenClaw after editing the config.

**status says it can't connect**

```bash
nmem status
curl -sS http://127.0.0.1:14242/health
```

**Search only returns one or two results**

Raise `maxRecallResults` to `8` or `12`.
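For example, with `jq` installed you could bump the value in place (a sketch; adjust the path if your config lives elsewhere):

```bash
# Raise maxRecallResults to 12 in ~/.openclaw/openclaw.json
jq '.plugins.entries."openclaw-nowledge-mem".config.maxRecallResults = 12' \
  ~/.openclaw/openclaw.json > /tmp/openclaw.json \
  && mv /tmp/openclaw.json ~/.openclaw/openclaw.json
```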
Why Nowledge Mem Instead of the Alternatives? [#为什么用-nowledge-mem-而不是其他方案]

Other memory tools store what you said as chunks of text and retrieve them by semantic similarity. Nowledge Mem is different.

**Knowledge has structure.** Every memory you save knows what type it is (a decision, a learning, a plan, or a preference), when it happened, which source documents it points to, and which other memories it relates to. That makes search more precise and reasoning more reliable.

**Knowledge evolves.** The understanding you write today and the revised take three months from now are linked in the system. You can see how your thinking changed without losing the steps in between.

**Provenance is transparent.** Every piece of knowledge extracted from a PDF, document, or web page keeps a link to the original file. When the AI says "according to your March design doc," you can verify it directly.

**Shared across tools.** What you learn in Cursor and note in Claude works in OpenClaw too. Your knowledge doesn't belong to any single tool; it belongs to you.

**Local-first, no cloud account.** Your knowledge lives on your machine. Remote access is optional, not required.

How does search work? See [Search & Relevance](/zh/docs/search-relevance).

For Power Users [#给进阶用户]

OpenClaw's `MEMORY.md` workspace file still works, but the actual memory tool calls are all handled by Nowledge Mem. The two coexist fine.

The plugin talks to Nowledge Mem through an `nmem` CLI subprocess, so local and remote modes behave identically; once the address is configured, nothing else changes.

References [#参考]

* Plugin source: [nowledge-mem-openclaw-plugin](https://github.com/nowledge-co/community/tree/main/nowledge-mem-openclaw-plugin)
* OpenClaw docs: [plugin system](https://docs.openclaw.ai/tools/plugin)
* Changelog: [CHANGELOG.md](https://github.com/nowledge-co/community/blob/main/nowledge-mem-openclaw-plugin/CHANGELOG.md)

# Search Through Time (/docs/zh/use-cases/bi-temporal)

import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';

The Problem [#问题所在]

The board asks: *"Why did you choose React Native over Flutter in Q1?"*

You remember the decision. But you remember it through the lens of everything that happened since: the pivot, the performance issues, the rewrite. What you need to answer is: **what did you know at the time?**

> "I can search my notes for 'React Native.' I can't search for 'what I thought about React Native in March.'"

The Solution [#解决方案]

Nowledge Mem uses **bi-temporal search**: two time dimensions that let you find exactly what you're after.

Bi-temporal search

**Event time**: when did it actually happen?
**Record time**: when did you capture it?

Search on either, or combine them.

Search query details

Blog post: [How We Taught Nowledge Mem to Forget](https://nowledge-labs.ai/blog/memory-decay-temporal). Docs: [Search & Relevance](/zh/docs/search-relevance).

How It Works [#工作原理]

Natural-Language Queries [#自然语言查询]

Just search naturally. Nowledge Mem understands temporal intent:

> "What did I decide about React Native in Q1 2024?"

The system:

1. Detects the temporal intent: "Q1 2024"
2. Searches for memories whose **events** fall in that period
3. Returns results with their original context

No special syntax needed.

Explicit Time Filters [#显式时间过滤器]

For precise control, use advanced search:

| Filter | Meaning | Example |
| --------- | -------- | ---------- |
| **Event date from** | Event happened after | 2024-01-01 |
| **Event date to** | Event happened before | 2024-03-31 |
| **Record date from** | Written down after | 2024-01-01 |
| **Record date to** | Written down before | 2024-12-31 |

**A powerful example:**

> Event time: March 2024
> Record time: any

Returns: *"every memory about March 2024 events, no matter when you recorded it."*

Flexible Date Precision [#灵活的日期精度]

Nowledge Mem handles flexible dates:

* **Year**: "2024" -> matches anything in 2024
* **Month**: "2024-03" -> matches March 2024
* **Day**: "2024-03-15" -> matches that specific date

The system preserves your original precision and displays results accordingly.

Knowledge Evolution [#知识演化]

Bi-temporal search gets even more powerful combined with knowledge evolution. Background Intelligence automatically detects when your thinking about a topic changes:

**Tuesday**: you save "PostgreSQL for the new service."
**Thursday**: you mention CockroachDB as a migration target.
**Friday**: Background Intelligence links them with an EVOLVES relationship and flags the tension.

Now a search for "database decision" doesn't return isolated memories. You get the **evolution chain**: the original decision, the update, and the relationship between them. You see exactly when and how your thinking changed.

Evolution types:

* **Replaces**: new information supersedes the old
* **Enriches**: new information adds detail to the old
* **Confirms**: the same conclusion from a different source
* **Challenges**: contradictory information, flagged for review

Real Examples [#实际示例]

Board Review [#董事会回顾]

> **Query**: "architecture decisions from Q1 2024"
>
> **Result**: the original decision memo with its Q1 context, plus the evolution chain showing how the decision changed

Compliance Audit [#合规审计]

> **Query**: "security policy before the incident"
>
> **Result**: the policy that existed before the breach, with timestamps proving when it was recorded

Project Retrospective [#项目复盘]

> **Query**: "project-x assumptions at kickoff"
>
> **Result**: the original assumptions later proven wrong, linked to the follow-up insights that disproved them

Knowledge Graph + Time [#知识图谱--时间]

The graph view has a **timeline slider** that filters nodes and edges by date range.

Set the range to "March 2024" and see:

* Only the entities that existed then
* Only the connections known then
* The state of your knowledge at that moment

Drag the slider forward and watch your understanding evolve. Play the animation to see knowledge accumulate over time.

How Memory Decay Works [#记忆衰减如何工作]

Memory decay follows these rules:

* **Prefers recent memories** by default (30-day half-life)
* **Boosts frequently accessed** memories (log scaling)
* **Respects importance scores** (an importance floor prevents full decay)
* **Learns from behavior** (clicks, dwell time)

Normal search surfaces fresh, relevant results; temporal search bypasses decay and returns exactly the period you specify.

Temporal intent detection requires **deep mode** search. In fast mode, time references match as keywords only. For queries like "what I've been working on" or "last quarter's decisions," enable deep mode.

See [Search & Relevance](/zh/docs/search-relevance) for the full technical breakdown of scoring, decay, and temporal matching.

Two Kinds of Time [#两种时间]

Understanding the difference is the key:

| Question | Which time? |
| -------------- | ----- |
| "What did I decide in March?" | Event time |
| "What did I write last week?" | Record time |
| "Show recent notes about old events" | Both |
| "What did I know before the pivot?" | Event time |

Most searches use **event time**, because you're asking when things happened.

**Record time** is useful for:

* Finding recent captures
* Reviewing what you've been recording
* Auditing when knowledge was written down
Why This Matters [#为什么这很重要]

Traditional search finds content. Temporal search finds **context**. Knowledge evolution finds the **story**.

> "We made the best decision with the information we had. Here's the evidence. And here's the full record of when and why our thinking changed."

Your memory: timestamped, versioned, and auditable.

Next Steps [#下一步]

* [Your Knowledge, Yours to Keep](/zh/docs/use-cases/shared-memory) -> switch tools freely without losing context
* [See Your Expertise](/zh/docs/use-cases/expertise-graph) -> visualize your knowledge
* [Background Intelligence](/zh/docs/advanced-features) -> knowledge graph capabilities

# See Your Expertise (/docs/zh/use-cases/expertise-graph)

import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';

The Problem [#问题所在]

You've accumulated years of knowledge. Can you see the whole of it?

> I know I'm good at... things. Technical things. But if someone asked me to describe my expertise, I'd struggle. It's all intuition. Nothing concrete.

Knowledge is scattered across memories, notes, and conversations, with the patterns and connections invisible.

The Solution [#解决方案]

Nowledge Mem visualizes your knowledge as a **living graph**. Nodes are your memories and entities; edges are relationships. The graph **builds itself**: Background Intelligence extracts entities and relationships from your memories overnight.

Run **community detection** and watch your expertise clusters emerge:

Expertise graph

How It Works [#工作原理]

The graph builds automatically [#图谱自动构建]

You never label or categorize anything by hand. Background Intelligence reads your memories and extracts:

* **Entities**: technologies, people, concepts, projects
* **Relationships**: how they connect
* **Evolution chains**: how your thinking on a topic changed

This all happens automatically. Save memories through any channel (auto-sync, the browser extension, the Timeline, `/sum`) and the graph grows on its own.

Automatic entity extraction requires a [Pro license](/zh/docs/mem-pro) and a configured remote LLM.

Run community detection [#运行社区检测]

In the right panel, find **Graph Algorithms** and click **Compute** under **Clustering**.

The Louvain algorithm analyzes your knowledge structure and finds natural clusters:

| Community | Size | Theme |
| ----- | ------ | ------- |
| Distributed systems | 87 memories | Backend architecture, scaling |
| Team leadership | 45 memories | Mentoring, communication |
| Performance | 62 memories | Optimization, profiling |
| Side projects | 23 memories | Creative experiments |

Each cluster gets a colored "bubble" around its nodes.
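You can also kick this off without the UI. The API reference later in these docs lists a manual trigger for community detection; a sketch against a local instance (the exact route is inferred from the reference page path, and the default port is assumed):

```bash
# Manually trigger community detection on the knowledge graph
curl -X POST "http://127.0.0.1:14242/agent/trigger/community-detection"
```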
Travel Through Time [#穿越时间]

The **timeline slider** at the bottom of the graph filters by date range.

Drag it to "January 2024" and see the state of your knowledge back then. Drag forward and watch new clusters form, existing ones grow, and connections multiply.

Play the animation to watch your expertise evolve over months: when a new interest appears, when it connects to existing knowledge, when it grows into a full cluster.

Explore and Discover [#探索和发现]

Navigating the graph:

* **Click** any node for details
* **Double-click** to expand neighbors
* **Shift+drag** to lasso-select multiple nodes
* **Press C** to toggle community bubbles
* **Press E** to expand the selected node's neighbors

Find patterns you never noticed:

> Every leadership memory links back to a debugging session. I lead by teaching debugging.

What You'll Discover [#你将发现什么]

Expertise Clusters [#专长集群]

Community detection reveals where your knowledge naturally groups:

* **Core strengths**: large, dense clusters
* **Emerging areas**: small but growing clusters
* **Bridges**: nodes connecting multiple clusters (often your most distinctive skills)

Knowledge Evolution [#知识演化]

Background Intelligence tracks how your thinking changes:

* **Tuesday**: "PostgreSQL for the new service"
* **Thursday**: "Considering a CockroachDB migration"
* **Friday briefing**: "Your database choice is evolving"

These evolution chains appear as linked nodes in the graph. You can see exactly where your views shifted and trace the whole path.

Hidden Patterns [#隐藏模式]

Explore and find:

* Recurring themes you never consciously tracked
* Connections between seemingly unrelated projects
* Your distinctive perspectives and methods
* Gaps between related topics

Ask the AI About Your Graph [#向-ai-询问你的图谱]

Look at your graph, then ask AI Now to interpret it:

> Based on my knowledge graph, what career paths fit me best?

AI Now synthesizes:

> Your memories show a rare intersection of deep systems knowledge and teaching ability. Your most central concepts (event-driven architecture, debugging) bridge the technical and leadership clusters. Consider: Staff Engineer, Developer Advocate, or a technically focused Engineering Manager role.

More questions to try:

* "What are my strongest areas of expertise?"
* "Where are the gaps in my knowledge?"
* "What topics should I explore next?"
* "How has my focus shifted over time?"

The Compounding Effect [#复合效应]

More memories, richer graph, deeper insight.

**After 1 month:**

> I can see my main themes, but the clusters are small.

**After 6 months:**

> Clear areas of expertise. Unexpected connections surfacing. Background Intelligence spots patterns I missed.

**After 1 year:**

> I can actually see how my thinking evolved. Last year's connections laid the groundwork for this year's.

**For performance reviews:**

> I explored my graph before the review. Concrete examples of growth on every dimension.

Next Steps [#下一步]

* [Background Intelligence](/zh/docs/advanced-features) -> how the graph grows automatically
* [Your Knowledge, Yours to Keep](/zh/docs/use-cases/shared-memory) -> switch tools freely without losing context
* [Search Through Time](/zh/docs/use-cases/bi-temporal) -> temporal queries and evolution chains

# Overview (/docs/zh/use-cases)

import { Cards, Card } from 'fumadocs-ui/components/card';
import { Brain, Clock, FileText, Network, MessageSquare, Sparkles } from 'lucide-react';

Nowledge Mem learns from everything you do with AI. It captures conversations automatically, syncs sessions in real time, and builds a knowledge graph that grows overnight. Every connected tool starts with your full context.

* **[Your Knowledge, Yours to Keep](/zh/docs/use-cases/shared-memory)** — Tell Claude once and Cursor already knows. One knowledge base across every AI tool you use.
* **[Never Lose a Session](/zh/docs/use-cases/session-backup)** — Sessions sync automatically in real time. Claude Code, Cursor, Codex, ChatGPT -- every conversation captured.
* **[Search Through Time](/zh/docs/use-cases/bi-temporal)** — The board asks why you chose React Native. Find what you believed then, not what you know now.
* **[Your Notes, Everywhere](/zh/docs/use-cases/notes-everywhere)** — Obsidian, Notion, PDFs, Word documents. One search across every knowledge source.
* **[See Your Expertise](/zh/docs/use-cases/expertise-graph)** — The graph builds itself from your memories. Community detection reveals expertise clusters you didn't know you had.
* **[AI Now](/zh/docs/ai-now)** — A personal AI agent that runs locally. Deep research, file analysis, presentations: purpose-built capabilities on top of your full knowledge.

Three Core Shifts [#三个核心变化]

**Automatic capture.** The browser extension grabs insights from ChatGPT, Claude, Gemini, and 13+ other platforms. Claude Code, Cursor, and Codex sessions sync in real time. No more copy-pasting between tools.

**It learns while you sleep.** Background Intelligence detects how your thinking evolves, synthesizes reference articles from scattered memories, and flags contradictions. Each morning's briefing at `~/ai-now/memory.md` tells your AI tools what you're working on before you say a word.

**It goes where you go.** One command connects 20+ AI agents. Switch tools freely; the knowledge stays.

How It Works [#工作原理]

1. **Capture** -- browser extension, session sync, or typing straight into the Timeline
2. **Connect** -- the system links it to everything you already know
3. **Grow** -- Background Intelligence builds evolution chains, crystals, and flags overnight
4. **Use** -- any connected tool finds it automatically when needed

The knowledge accumulates in Mem, independent of any single tool.

Get Started [#开始]

Pick a use case above to learn more, or head straight to [Getting Started](/zh/docs/getting-started).

# Your Notes, Everywhere (/docs/zh/use-cases/notes-everywhere)

import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';

The Problem [#问题所在]

You've been taking notes for years. Obsidian. Notion. Maybe both.

Thousands of entries, carefully tagged, extensively linked. And yet:

> I know I wrote this down. I just can't find it. Search doesn't help. Tags don't help.

Worse, your AI assistants don't even know these notes exist; you keep re-explaining things your notes already cover.

The Solution [#解决方案]

Don't replace your note apps; **plug them into your knowledge**.

Keep using Obsidian, Notion, Apple Notes, or a folder of Markdown exactly as you do now. Nowledge Mem connects to them, making your notes searchable alongside your memories, through AI Now, and through any AI tool via MCP.

With the **Library**, you can also drag in PDFs, Word documents, and presentations. Everything searchable from one place.

Notes everywhere

How It Works [#工作原理]

Connect your notes [#连接你的笔记]

**Obsidian:**

1. Open AI Now in Nowledge Mem
2. Go to **Plugins** -> enable **Obsidian Vault**
3. Set your vault path (e.g., `/Users/you/Documents/ObsidianVault`)
4. Done. AI Now can now search your vault

Notes everywhere

**Notion:**

1. Open AI Now -> **Plugins** -> enable **Notion**
2. Click **Connect Notion**
3. Authorize access in the browser popup
4. Your workspace is now accessible

Import documents into the Library [#将文档导入资料库]

Drag files straight into the Timeline input box, or open the Library view:

| Format | Extensions | Processing |
| ------------ | ----------- | ----------- |
| **PDF** | .pdf | Text extracted, chunked, indexed |
| **Word** | .docx, .doc | Parsed to text, chunked, indexed |
| **Presentations** | .pptx | Slide content extracted and indexed |
| **Markdown** | .md | Parsed and indexed directly |

Once indexed, document content is searchable alongside your memories and notes.
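Imports can also be scripted against the local API. The API reference documents a `POST /sources/ingest/file` endpoint that accepts a multipart upload and runs the full parse -> chunk -> index pipeline; a sketch, where the form field name is an assumption:

```bash
# Ingest a PDF through the source pipeline on a local instance
curl -X POST "http://127.0.0.1:14242/sources/ingest/file" \
  -F "file=@/path/to/report.pdf"
```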
Search across everything [#跨所有内容搜索]

Ask AI Now anything:

> What do my notes say about quantum computing?

AI Now:

1. Searches your Obsidian vault
2. Searches your Notion workspace
3. Searches your Nowledge memories
4. Searches your Library documents
5. Combines and synthesizes the results

One question, every knowledge source.

Distill into memories [#提炼成记忆]

Found valuable notes? Turn them into permanent memories:

> Distill the key insights from these quantum computing notes

AI Now creates:

* **Insight**: "Quantum error correction requires O(n^2) qubits"
* **Decision**: "Near-term research focuses on NISQ algorithms"
* **Fact**: "IBM claimed quantum advantage in December 2023"

These memories are now:

* Searchable with semantic understanding
* Connected in the knowledge graph
* Available to all your AI tools via MCP
* Surfaced in your Working Memory briefing when relevant

Obsidian Integration [#obsidian-集成]

Setup [#设置]

Open Nowledge Mem. Click the AI Now tab. Go to **Plugins** in the sidebar. Find **Obsidian Vault** and toggle it on. Enter your vault path, e.g. `/Users/yourname/Documents/ObsidianVault`.

What you can do [#你可以做什么]

Once connected:

* Search notes by content: *"find my notes on machine learning"*
* Read specific notes: *"show my note about the project kickoff"*
* Reference them in context: *"based on my Obsidian notes about X, help me..."*

Your vault is read locally. Notes are never uploaded anywhere; Nowledge Mem just reads the files on your machine.

Notion Integration [#notion-集成]

Setup [#设置-1]

Open AI Now -> **Plugins**. Find **Notion** and click **Connect**. Authorize in the browser popup. Choose the workspace you want to connect.

What you can do [#你可以做什么-1]

* Search your workspace: *"find the quarterly planning pages"*
* Read page content: *"what's on my product roadmap page?"*
* Cross-reference: *"compare my Notion notes with my memories about X"*
* Deep research combining public and private knowledge: *"what's the latest in quantum computing?"*

Notion uses secure OAuth. You control exactly which pages Nowledge Mem can access, and you can revoke access anytime from Notion's settings.

Built-in Integrations [#内置集成]

Some tools ship with Nowledge Mem built in:

* **DeepChat**: enable Nowledge Mem in settings. Your memories are available in every conversation.
* **LobeHub**: install from the marketplace. Full MCP integration.

Coming Soon [#即将推出]

* **Apple Notes** integration

Join the [community](/zh/docs/community) to request integrations.

Next Steps [#下一步]

* [AI Now](/zh/docs/ai-now) -> what else AI Now can do
* [Library](/zh/docs/library) -> import and search documents
* [See Your Expertise](/zh/docs/use-cases/expertise-graph) -> visualize your knowledge graph
* [Integrations](/zh/docs/integrations) -> full setup guides

# Never Lose a Session (/docs/zh/use-cases/session-backup)

import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';

The Problem [#问题所在]

You just had an epic debugging session. Three hours with Claude Code. You found a race condition, traced it through 15 files, and built the perfect fix with tests.

But AI conversations are ephemeral. Context gets compacted, token limits hit, sessions expire. The early part of that 200-message thread is already gone.

> "I've solved this exact problem before. I just don't remember how. Or where. Or when."

The Solution [#解决方案]

Your sessions sync automatically. Claude Code, Cursor, Codex, and OpenCode conversations are captured in real time. Browser chats with ChatGPT, Claude, and Gemini are grabbed by the extension. No commands to remember, no manual exports.

When you're ready, distill a thread into permanent, searchable, graph-connected memories.

How It Works [#工作原理]

Sessions sync automatically [#会话自动同步]

**Claude Code and Codex (npx skills):**

Install once:

```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```

Sessions save automatically. The agent distills key insights when a session ends.

**Cursor and OpenCode (auto-sync):**

Nowledge Mem monitors for new conversations in real time. Open **Threads** and watch them appear as you work. No import step.

**Browser (ChatGPT, Gemini, Claude Web):**

The Exchange v2 extension captures conversations from 13+ AI chat platforms. Insights flow into Mem as you chat.

**Manual save (any MCP tool):**

```
/save -> save the full conversation thread
/sum  -> distill the insights into memories
```

Distill into permanent knowledge [#提炼成永久知识]

Open a saved thread and click **Distill**. The AI reads the whole conversation and extracts:

* **Decisions**: "Chose sliding window over token bucket because..."
* **Insights**: "Race conditions in async callbacks need mutex locks"
* **Patterns**: "Testing time-based bugs requires mocking the clock"
* **Facts**: "Redis SETNX provides atomic lock acquisition"

Each becomes an independent, searchable memory with appropriate tags.

Background Intelligence connects them [#后台智能自动连接]

Your new memories don't sit in isolation. Background Intelligence:

* Links them to earlier work on the same codebase
* Detects whether they update or contradict earlier decisions
* Connects them to related entities in the knowledge graph
* Surfaces them in the next morning's Working Memory briefing

Three months later a colleague hits the same bug; your briefing mentions it before they finish asking.

Search anytime [#随时搜索]

Three months later, a similar bug appears:

> Search: "payment race condition"

Nowledge Mem returns the full context: the problem, the debugging steps, the solution, the testing approach.

Stop re-solving solved problems.

Capture Sources [#捕获来源]

| Source | How | What's captured |
| --------------- | ----------------------- | ------------- |
| **Claude Code** | npx skills (automatic) or `/save` | Full sessions with code context |
| **Codex** | npx skills (automatic) or `/save` | Full sessions with code context |
| **Cursor** | Auto-sync (real-time monitoring) | Conversations captured live |
| **OpenCode** | Auto-sync (real-time monitoring) | Conversations captured live |
| **ChatGPT** | Browser extension (auto-capture) | Insights from web chats |
| **Claude Web** | Browser extension (auto-capture) | Insights from web chats |
| **Gemini** | Browser extension (auto-capture) | Insights from web chats |
| **13+ more** | Browser extension | Any supported AI chat platform |

What Gets Extracted [#提取的内容]

When you distill a thread, the AI creates memories by type:

| Type | Example | Tags |
| ------ | ------------------ | -------- |
| **Decision** | "Use Redis for distributed locking" | decision, architecture |
| **Insight** | "Async callbacks need careful ordering" | insight, debugging |
| **Procedure** | "Steps to reproduce the race condition" | procedure, testing |
| **Fact** | "SETNX returns 1 if the key was set" | fact, redis |
| **Experience** | "Debugging session on the payment service" | experience, project |

The Compounding Effect [#复合效应]

One thread is useful. Ten are a knowledge base. A hundred are your institutional memory.

> "A junior dev hit the same bug today. Sent them my memory. They fixed it in 20 minutes instead of 3 hours."

Debugging sessions stop being just conversations; they become reusable knowledge for your future self.

Pro Tips [#专业提示]

You don't need to distill every thread. Save the sessions that matter: breakthroughs, architecture decisions, hard-won solutions.

For sensitive codebases, review what you're saving. Threads can contain proprietary code or credentials.

Next Steps [#下一步]

* [Your Knowledge, Yours to Keep](/zh/docs/use-cases/shared-memory) -> switch tools freely without losing context
* [Search Through Time](/zh/docs/use-cases/bi-temporal) -> find memories from a specific period
* [Integrations](/zh/docs/integrations) -> setup guides for every tool

# Your Knowledge, Yours to Keep (/docs/zh/use-cases/shared-memory)

import VideoPlayer from "@/components/ui/video-player"
import { Step, Steps } from 'fumadocs-ui/components/steps';

The Problem [#问题所在]

Last week you told Claude Code the project architecture. Today you're explaining it again to Cursor. Tomorrow you want to try the new tool everyone's talking about, and you'd be starting from zero.

This isn't a memory problem; it's a lock-in problem. Your knowledge is trapped in whichever tool you used last.

> "I already explained this. Why does switching tools mean starting over?"

The Solution [#解决方案]

Nowledge Mem is a knowledge layer that sits between you and all your AI tools. It captures your insights automatically, syncs sessions in real time, and writes a daily briefing so every tool starts with your full context.

One command to connect. Zero workflow changes.

Shared memory

How It Works [#工作原理]

Connect with one command [#一条命令连接]

```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```

Works with Claude Code, Cursor, Codex, OpenCode, OpenClaw, Alma, and 20+ other agents. It installs four skills: the working-memory briefing, knowledge search, session saving, and insight capture.

Once installed, agents read your morning briefing at session start, search your knowledge while working, and save what they learn.

Capture happens automatically [#捕获自动发生]

You don't have to remember to save. Mem captures from several channels:

**Browser extension (Exchange v2):** monitors your AI conversations on ChatGPT, Claude, Gemini, and 13+ other platforms. Insights are captured automatically as you work.

**Session auto-sync:** Claude Code, Cursor, Codex, and OpenCode sessions sync in real time. A 3-hour debugging session is saved without you typing a single command.

**Timeline input:** type a thought, paste a URL, drag in a file, for when you want to save something specific.

**Manual commands:**

```
/sum  -> summarize this conversation into memories
/save -> save the whole thread
```

Every tool starts informed [#每个工具都知情启动]

Each morning, Background Intelligence writes a briefing to `~/ai-now/memory.md`. Every connected AI tool reads it at session start.

Your agent already knows:

* What you're working on
* Your recent decisions
* Open questions and contradictions
* How your thinking has evolved

No re-explaining. Open Claude Code at 9 a.m. and it picks up where you left off.
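The briefing is a plain Markdown file, so you can read it (or edit it; your changes are respected) like any other file:

```bash
# See what your agents will see at session start
cat ~/ai-now/memory.md
```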
Switch tools freely [#自由切换工具]

New tool? Connect it to Mem and it has your full context immediately.

**Example:**

You saved: *"Architecture decision: use Redis for session management because..."*

Later, in Cursor: *"help me add session handling"*

Cursor searches your knowledge, finds the Redis decision, and applies the same pattern. No re-explaining.

A Concrete Example [#实际示例]

**Without Nowledge Mem:**

> You: "Help me implement rate limiting"
>
> Claude: "What kind? Token bucket? Sliding window? What's your use case?"
>
> You: *\[explaining for the 5th time this month]*

**With Nowledge Mem:**

> You: "Help me implement rate limiting"
>
> Claude: *\[reads the working-memory briefing, searches your memories]* "Based on last month's decision to use sliding-window rate limiting for the payment service, here's an implementation matching your Redis patterns..."

Ways to Connect [#连接方式]

| Channel | How it works | What it captures |
| ------------------ | ------------ | ------------------------------------ |
| **npx skills** | One command, 20+ agents | Working memory, search, save, distill |
| **Browser extension** | Auto-captures AI conversations | Insights from ChatGPT, Claude, Gemini, and 13+ platforms |
| **Session auto-sync** | Real-time monitoring | Claude Code, Cursor, Codex, OpenCode sessions |
| **MCP** | Direct protocol connection | Any MCP-compatible tool |
| **Claude Desktop** | One-click extension | Full integration |
| **Built-in support** | Toggle in settings | DeepChat, LobeHub |

The Compounding Effect [#复合效应]

After a few weeks, a newly connected tool knows how you work right away. Preferences persist across tools. Decisions keep accumulating. Every insight you've ever saved is available to every tool you'll ever use.

The value accrues in Mem, not in any single tool.

Next Steps [#下一步]

* [Never Lose a Session](/zh/docs/use-cases/session-backup) -> auto-sync and back up AI conversations
* [Search Through Time](/zh/docs/use-cases/bi-temporal) -> find what you knew then
* [Integrations](/zh/docs/integrations) -> connect every tool

# Get Evolves Edges (/docs/api/agent/evolves/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get EVOLVES relationships. Edge direction: older → newer. When memory\_id is provided, returns only edges where that memory participates (as either the older or newer node). Use this to get the full version chain for a specific memory.

# Get Agent Status (/docs/api/agent/status/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get the Knowledge Agent's current status.

# Get Working Memory (/docs/api/agent/working-memory/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Read the Working Memory file (\~/ai-now/memory.md). Returns today's WM by default, or an archived day's WM if date is provided. This is the single source of truth for WM content — feed events are snapshots.

# Update Working Memory (/docs/api/agent/working-memory/put)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Write the Working Memory file from user edits. Validates structure, writes to \~/ai-now/memory.md, and emits a feed event with edited\_by="user" to distinguish from agent-generated updates.

# Get Community Details (/docs/api/communities/community_id/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get community details including entities and sample memories.

# Get Favorite Memories (/docs/api/favorites/memories/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get all favorite memories.

# Get Favorite Threads (/docs/api/favorites/threads/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get all favorite threads.

# Get Graph Analysis (/docs/api/graph/analysis/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get comprehensive graph analysis including community and centrality metrics. This endpoint provides a complete overview of the graph structure, communities, and centrality measures without triggering new calculations.

# Graph Analysis Health (/docs/api/graph/health/get)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Health check for graph analysis service.
Returns the status of algo extension and graph analysis capabilities. # Cleanup Orphaned Entities (/docs/api/graph/orphans/delete) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Clean up all orphaned entities from the graph. This safely removes Entity nodes that have no relationships: * No MENTIONS from any Memory * No RELATES\_TO connections to other entities * No HAS\_LABEL relationships This operation only affects Entity nodes and will not delete: * Internal system nodes (GraphMeta, migrations, etc.) * Label nodes * Community nodes * Memory nodes # Find Orphaned Entities (/docs/api/graph/orphans/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Find all orphaned entities in the graph. Orphaned entities are Entity nodes that have no relationships: * No MENTIONS from any Memory * No RELATES\_TO connections to other entities * No HAS\_LABEL relationships * No BELONGS\_TO community relationships This only checks Entity nodes, not internal system nodes like schema versions. # Get Graph Data (/docs/api/graph/sample/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get graph data for visualization. # Search Graph (/docs/api/graph/search/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Enhanced graph search that finds relevant content and builds visualization data. # Delete Label (/docs/api/labels/label_id/delete) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Delete a label and all its relationships. # Get Label (/docs/api/labels/label_id/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get a specific label by ID. # Update Label (/docs/api/labels/label_id/put) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Update an existing label. # Distill Memories From Thread (/docs/api/memories/distill/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Create memories from thread content after distillation. This endpoint actually creates memories in the database based on the distillation type. For knowledge graph mode, it includes entity and relationship metadata. # Delete Memory (/docs/api/memories/memory_id/delete) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Delete a memory and optionally its relationships. # Get Memory (/docs/api/memories/memory_id/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get a specific memory by ID with associated labels. # Update Memory (/docs/api/memories/memory_id/patch) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Update memory properties like importance, title, and content. 
# Reindex Memories Bulk (/docs/api/memories/reindex/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Reindex multiple memories or all memories needing reindex. This endpoint can work for both single and bulk reindexing: * If memory\_ids is provided: reindex those specific memories * If memory\_ids is None/empty: reindex all memories with reindex\_needed=True # Search Memories (/docs/api/memories/search/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Memory search with filtering, metadata, and reasoning support. # Reindex Search Index (/docs/api/search-index/reindex/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Rebuild the search index from Kuzu database. This performs a full reindex of: * All memories (with search embeddings) * All thread messages * All communities * All entities The embedding model is platform-specific: * macOS Apple Silicon: Qwen3-Embedding via mlx-embeddings * Windows/Linux: BGE-M3 via FastEmbed/ONNX This is a heavy operation and should only be triggered: * After first downloading the search embedding model * After a data migration * When explicitly requested by the user # Get Search Index Status (/docs/api/search-index/status/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get status of the search index (LanceDB + hybrid search). The embedding model is platform-specific: * macOS Apple Silicon: Qwen3-Embedding via mlx-embeddings * Windows/Linux: BGE-M3 via FastEmbed/ONNX This endpoint checks: * Whether the search embedding model is cached locally * Whether the search index service is initialized # Search Sources (/docs/api/sources/search/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Full-text search across source names and content. # Delete Source (/docs/api/sources/source_id/delete) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Delete a source and its search index records. # Get Source Detail (/docs/api/sources/source_id/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get source detail with related memories and revision chain. # Update Source (/docs/api/sources/source_id/patch) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Update source lifecycle state. Supported actions: * 'reparse': Re-run parse → chunk → index pipeline * 'mark\_stale': Mark source as stale (needs re-processing) # Bulk Delete Threads (/docs/api/threads/bulk/delete) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Delete multiple threads and optionally their extracted memories. # Import Bulk Threads (/docs/api/threads/import-bulk/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. 
*/} Import selected threads from a bulk export. This endpoint starts a background import job and returns immediately. Use the job\_id to poll for progress. # Get Import Config (/docs/api/threads/import-config/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get the current import configuration. # Update Import Config (/docs/api/threads/import-config/put) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Update import configuration. # Parse Thread Content (/docs/api/threads/parse/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Parse thread content from various formats. # Parse Bulk Export (/docs/api/threads/parse-bulk/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Parse all threads from a bulk export file. This endpoint parses the export file and returns summaries of all threads found. The full thread content is not returned here to keep the response size manageable. # Search Threads Full (/docs/api/threads/search/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Full thread search with message matching. # Get Thread Summaries (/docs/api/threads/summaries/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get all thread titles/summaries. # Delete Thread (/docs/api/threads/thread_id/delete) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Delete a thread and optionally its extracted memories. # Get Thread (/docs/api/threads/thread_id/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get a complete thread with messages. # Get Feed Events (/docs/api/agent/feed/events/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get feed events from time-partitioned JSONL files. Supports two filtering modes: * last\_n\_days: N days back from today (default) * date\_from + date\_to: explicit date range (YYYY-MM-DD) Both modes can be combined with event\_type, severity, and unresolved\_only. # Get Knowledge Processing Status (/docs/api/agent/knowledge-processing/status/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get knowledge processing settings and status. # Trigger Community Detection (/docs/api/agent/trigger/community-detection/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Manually trigger community detection on the knowledge graph. # Trigger Crystallization (/docs/api/agent/trigger/crystallization/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Manually trigger a crystallization review. 
# Trigger Daily Briefing (/docs/api/agent/trigger/daily-briefing/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Manually trigger a daily briefing. # Trigger Insight Detection (/docs/api/agent/trigger/insight-detection/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Manually trigger proactive insight detection. # Trigger Kg Extraction (/docs/api/agent/trigger/kg-extraction/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Manually trigger KG extraction (backfill, targeted, or scoped to specific memories). # Get Working Memory History (/docs/api/agent/working-memory/history/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} List dates that have archived Working Memory files. Scans \~/ai-now/memory-archive/ for YYYY/MM/YYYY-MM-DD.md files. Returns newest-first. # Get Entity Relationships (/docs/api/entities/entity_id/relationships/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get relationships for a specific entity. Returns all connected entities and memories via RELATES\_TO and MENTIONS relationships. # List Augmentation Jobs (/docs/api/graph/augmentation/jobs/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} List recent augmentation jobs. Optionally filter by status (pending, running, completed, failed). # Start Augmentation Job (/docs/api/graph/augmentation/start/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Start a background augmentation job. Supports job types: * 'community\_detection': Apply Louvain community detection * 'pagerank\_calculation': Apply PageRank importance calculation * 'undo\_community\_detection': Remove community detection augmentation * 'undo\_pagerank\_calculation': Remove PageRank augmentation # Get Augmentation State (/docs/api/graph/augmentation/state/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get the current graph augmentation state. Returns information about which augmentations are currently applied, their parameters, and the last augmentation timestamp. # Expand Neighbors (/docs/api/graph/expand/node_id/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Expand neighbors of a specific node to get connected nodes and edges with depth-based traversal. # Preview Distillation (/docs/api/memories/distill/preview/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Preview distillation results without creating memories in the database. This endpoint processes content and returns distilled data for user review before they decide to save the memories. Returns a cache\_key that can be used to reuse these results in the actual distillation call. Supports two modes: 1. 
Simple LLM summarization - just extract key memories 2. Knowledge graph extraction - extract entities, relationships, and memories # Export Memory (/docs/api/memories/memory_id/export/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Export a memory in various formats. # Toggle Memory Favorite (/docs/api/memories/memory_id/favorite/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Toggle memory favorite status. # Get Memory Labels (/docs/api/memories/memory_id/labels/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get labels assigned to a memory. # Get Reindex Status (/docs/api/memories/reindex/status/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get status of memories needing reindex. # Install Bge M3 (/docs/api/models/bge-m3/install/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Download and install the search embedding model for hybrid search. The model downloaded is platform-specific: * macOS Apple Silicon: Qwen3-Embedding (\~400MB, 4-bit quantized) * Windows/Linux: BGE-M3 (\~542MB, INT8 quantized) This also deletes the old E5 embedding model to save space. After installation, run /search-index/reindex to build the index. # Get Bge M3 Status (/docs/api/models/bge-m3/status/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Check the status of the search embedding model for hybrid search. The model is platform-specific: * macOS Apple Silicon: Qwen3-Embedding (1024-dim, \~400MB) * Windows/Linux: BGE-M3 (1024-dim, \~542MB) Both provide high-quality multilingual embeddings for LanceDB hybrid search. # Ingest File (/docs/api/sources/ingest/file/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Ingest a file through the full source pipeline. Accepts a multipart file upload. The file is saved to a temp location, then processed through ingest → parse → chunk → index. # Ingest File Path (/docs/api/sources/ingest/file-path/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Ingest a file by local filesystem path (desktop app bridge). Unlike the multipart upload endpoint, this accepts a path to a file already on disk. Used by the Tauri desktop app. # Ingest Url (/docs/api/sources/ingest/url/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Fetch a URL and ingest through the source pipeline. Uses browse-now for authenticated content, falls back to httpx. # Get Source Content (/docs/api/sources/source_id/content/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Read the parsed content of a source for preview. Returns the markdown content produced by markitdown during parsing. 
Works uniformly for files, URLs, and notes — all store parsed .md on disk. # Trigger Source Extraction (/docs/api/sources/source_id/extract/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Trigger knowledge extraction from a source (Learn lifecycle). User clicks 'Learn' button on a source in the Library. Queues a source\_extraction task for the Knowledge Agent. Returns 202-style response immediately (task runs in background). # Get Source Raw (/docs/api/sources/source_id/raw/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Serve the raw source file for native preview (PDF, DOCX, etc). # Refetch Source (/docs/api/sources/source_id/refetch/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Re-fetch a URL source's content using the browser and re-parse. Useful when the initial fetch captured an SPA shell or stale content. Only works for URL-type sources. # Discover Conversations (/docs/api/threads/conversations/discover/get) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Discover conversation files from AI coding assistants. Scans file system for conversation files from Claude Code, Codex, Cursor, and OpenCode. # Export Conversation Raw (/docs/api/threads/conversations/export-raw/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Export a raw conversation file as markdown or JSON without importing. Parses the session file using the same parsers as import, but returns formatted content directly instead of creating a thread. # Import Conversation (/docs/api/threads/conversations/import/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Import a conversation file into Nowledge Mem. Converts external conversation formats (Claude Code, Codex, Cursor, OpenCode) into threads. # Hide Project (/docs/api/threads/import-config/hide-project/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Hide a project from the browse view. # Hide Session (/docs/api/threads/import-config/hide-session/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Hide a session from the browse view. # Unhide Project (/docs/api/threads/import-config/unhide-project/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Unhide a project. # Unhide Session (/docs/api/threads/import-config/unhide-session/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Unhide a session. # Save Session (/docs/api/threads/sessions/save/post) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Save coding session(s) as conversation thread(s). Auto-detects sessions from project\_path. 
# Append Messages To Thread (/docs/api/threads/thread_id/append/post)

Append messages to an existing thread (for MCP integration). Supports two modes:

1. Direct messages: `{"messages": [...]}`
2. File-based: `{"file_path": "...", "format": "auto"}`

Optional controls:

* `deduplicate` (default: true)
* `idempotency_key` (string; used to derive stable `external_ids`)

# Get Thread Coverage (/docs/api/threads/thread_id/coverage/get)

Read-only coverage report for debugging progress issues.

# Export Thread (/docs/api/threads/thread_id/export/get)

Export a thread in various formats.

# Toggle Thread Favorite (/docs/api/threads/thread_id/favorite/post)

Toggle favorite status for a thread.

# Start Watcher (/docs/api/threads/watcher/start/post)

Start the session watcher for auto-importing sessions.

# Get Watcher Status (/docs/api/threads/watcher/status/get)

Get the current status of the session watcher.

# Stop Watcher (/docs/api/threads/watcher/stop/post)

Stop the session watcher.

# Delete Feed Event (/docs/api/agent/feed/events/event_id/delete)

Soft-delete a feed event by marking `deleted=True` in the JSONL file.

# Persist Question (/docs/api/agent/feed/input/persist-question/post)

Persist a question and its agent response as a feed event (JSONL). Called by the frontend after agent streaming completes for questions. Does NOT create a memory; it only writes the event for timeline persistence.

# Submit Feed Input Stream (/docs/api/agent/feed/input/stream/post)

Stream agent processing of feed input via the Wire Protocol. This is the agent-first approach: the agent classifies the input, searches the knowledge base, and streams its responses. Returns Server-Sent Events (SSE) carrying Wire Protocol messages:

* `turn_begin`: agent turn started
* `step_begin`: new processing step
* `text`: text content from the agent
* `thinking`: the agent's reasoning (if enabled)
* `tool_call`: the agent called a tool
* `tool_result`: a tool returned a result
* `turn_end`: agent turn completed
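To make the stream concrete, here is a rough client-side sketch. The message types match the list above, but the base URL, the request body, and the exact SSE framing are assumptions.

```python
# Hypothetical sketch: consume the Wire Protocol SSE stream.
# BASE_URL, the request body, and the "data: {json}" framing are
# assumptions; the message types come from the list above.
import json
import httpx

BASE_URL = "http://localhost:8000/api"  # assumption: local API server

with httpx.stream(
    "POST",
    f"{BASE_URL}/agent/feed/input/stream",
    json={"input": "What did I decide about token rotation?"},  # assumed body
    timeout=None,  # streams can outlive the default timeout
) as resp:
    for line in resp.iter_lines():
        if not line.startswith("data:"):
            continue  # skip SSE comments and keep-alive blanks
        msg = json.loads(line[len("data:"):].strip())
        if msg.get("type") == "text":
            print(msg.get("content", ""), end="", flush=True)
        elif msg.get("type") == "turn_end":
            break
```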
# Get Job Status (/docs/api/graph/augmentation/status/job_id/get)

Get the status of a specific augmentation job. Returns job progress, status, and any error messages.

# Apply Memory Kg Extraction (/docs/api/memories/memory_id/extract-kg/apply/post)

Apply knowledge graph extraction results to a memory. This endpoint saves the extracted entities and relationships to the graph database and updates the memory's metadata to track the extraction.

# Preview Memory Kg Extraction (/docs/api/memories/memory_id/extract-kg/preview/post)

Preview knowledge graph extraction for a memory. This endpoint extracts entities and relationships from a memory's content using the local LLM and returns them without saving anything to the database.

# Remove Label From Memory (/docs/api/memories/memory_id/labels/label_id/delete)

Remove a label from a memory.

# Assign Label To Memory (/docs/api/memories/memory_id/labels/label_id/post)

Assign a label to a memory.

# Get Source Image (/docs/api/sources/source_id/images/filename/get)

Serve an extracted image from a source's images/ directory.

# Get Import Status (/docs/api/threads/import-bulk/job_id/status/get)

Get the status of a bulk import job.

# Resolve Event (/docs/api/agent/feed/events/event_id/resolve/post)

Resolve an action-required feed event and optionally execute graph mutations. Resolution marks the event as resolved in the JSONL file. An optional action executes a graph mutation:

* `delete_memory`: delete all specified memories
* `keep_newer`: delete the first (older) memory, keep the rest
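For example, resolving a stale-knowledge flag while keeping the newer memory might look like the sketch below. The action values come from the list above; the base URL, the event id, and the body shape are assumptions.

```python
# Hypothetical sketch: resolve a feed event, keeping the newer memory.
# BASE_URL, the event id, and the body shape are assumptions; the
# "keep_newer" and "delete_memory" action values come from the docs above.
import httpx

BASE_URL = "http://localhost:8000/api"  # assumption: local API server
EVENT_ID = "evt_123"                    # hypothetical event id

resp = httpx.post(
    f"{BASE_URL}/agent/feed/events/{EVENT_ID}/resolve",
    json={"action": "keep_newer"},  # or "delete_memory"
)
resp.raise_for_status()
```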