Library
Import and search your documents alongside your memories
The Library is where your files become usable knowledge.
It is for source material that should stay whole: PDFs, reports, spreadsheets, slide decks, markdown notes, and code. Mem parses them, indexes them, and lets them work alongside your memories instead of living as isolated attachments.
Library vs memories
Use the Library for source material you want to preserve whole. Use memories for the durable takeaways. A strong workflow is: import the document, ask grounded questions against it, then deep-learn it only when you want its knowledge to become lasting memories and graph connections.
Drop a 40-page architecture review into the Library. Ask in the Timeline: "What does the review say about API rate limits?" The answer cites page 12 of the document and a Redis decision you saved three months ago. Your documents and your memories search together.
The Library stores PDFs, spreadsheets, Word files, presentations, code, and other formats. Content is parsed, split into searchable segments, and indexed. Once a document is Searchable, it shows up everywhere you already think from:
- AI Now: ask about a file by name, topic, or question. AI Now searches the Library, reads matching passages, and cites them alongside your memories.
- The Timeline Feed Agent: the built-in background Agent that turns conversations into memories can now also reach into Library documents when a thread references one.
- Graph Intelligence Agent: when exploring your graph, Source nodes can be searched and read directly, so you can move from a cluster of related memories into the underlying document without leaving the canvas.
- Connected AI tools via MCP: Claude Code, Cursor, and any other MCP-aware client can call `query_sources`, `read_source_content`, `search_source_chunks`, and `analyze_source_data` to ground their answers in your actual files.
- The `nmem` CLI: terminal and script workflows can search, read, and analyze the Library with `nmem sources search`, `read`, `search-chunks`, and `analyze`. See the CLI reference.
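As a sketch, a terminal session against the Library might look like the following. The subcommand names come from this page, but the exact flags and argument shapes are assumptions; consult the CLI reference for the real syntax.

```shell
# Hypothetical session; subcommand names are from this page, argument
# shapes are illustrative -- check the CLI reference for exact syntax.

# Find sources that mention rate limits
nmem sources search "API rate limits"

# Read a matching source's parsed content
nmem sources read "architecture-review.pdf"

# Search within a single source at chunk granularity
nmem sources search-chunks "architecture-review.pdf" "rate limits"

# Run an analysis pass (e.g. spreadsheet statistics)
nmem sources analyze "q4-metrics.xlsx"
```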
Reading the Library as a wiki
In v0.8 the Library also reads as a wiki. The memories, sources, and Crystals you have been collecting now show up as topic pages, entity pages, and crystal pages, cross-linked so you can click your way through them.
Switch to the Wiki tab inside the Library to see your knowledge grouped by topic. Each card lists the concepts most discussed in that cluster and a few Crystals that summarize them. Click a card to open its topic page; click an entity to open its wiki entry; click an [[Entity]] chip inside a Crystal to keep going.
When reading raises a real question, click Investigate on any wiki page. The Knowledge Graph opens with the right thing already selected: a single node for an entity or crystal page, the whole cluster for a topic page. From there you can pan, expand neighbours, or hand the selection to the Graph Intelligence Agent and ask what is actually going on.
For the underlying model (what the system maintains for you, what you stay in charge of), see the LLM Wiki concept page.
Wiki Export
The Wiki tab has a Download button on its tab row. It packages your wiki as a portable markdown folder with index.md, topics/, entities/, and crystals/ directories, cross-linked with the same [[wikilinks]] you see inside Mem. The folder opens directly in Obsidian, Logseq, or any markdown reader.
Re-export whenever you want a fresh copy. The export is a snapshot, not a sync target: edits you make outside Mem do not flow back automatically.
The First Useful Document
If you are new, import one document you actually care about. Then ask one concrete question about it in the Timeline.
That is the core loop:
- add one real source
- ask one grounded question
- see the answer use both the document and your existing knowledge
That is enough for a first proof. Deep-learning can wait until this basic loop already feels useful.
Supported Formats
| Format | Extensions | What Happens |
|---|---|---|
| PDF | .pdf | Native text is extracted with layout awareness; scanned pages can be read with a configured Vision model |
| Word | .docx | Parsed to text with image extraction, segmented, indexed |
| Presentations | .pptx | Slide content extracted with images, indexed |
| Spreadsheets | .xlsx, .csv | Parsed to markdown tables, indexed. Multi-sheet XLSX renders as tabs |
| Markdown | .md | Parsed and indexed directly |
| Plain text | .txt, .org | Indexed as-is |
| Code | .py, .js, .ts, .rs, .go, .java, .c, .cpp, .rb, .swift | Indexed |
| URL | .html, .pdf | Converted to markdown, indexed |
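As a rough illustration of the spreadsheet row above, here is a minimal CSV-to-markdown-table conversion in Python. This is not Mem's actual parser (which also handles XLSX, multiple sheets, and cell typing); it only shows what "parsed to markdown tables" means mechanically.

```python
import csv
import io

def csv_to_markdown(csv_text: str) -> str:
    """Render CSV text as a markdown pipe table (first row = header)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    lines = ["| " + " | ".join(header) + " |",
             "|" + "---|" * len(header)]
    lines += ["| " + " | ".join(r) + " |" for r in body]
    return "\n".join(lines)

print(csv_to_markdown("quarter,churn\nQ3,4.1%\nQ4,3.2%"))
```

Once in markdown form, the table rows are plain text and can be segmented and indexed like any other content.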
Adding Documents
Drag files into the Timeline input, or use the Library view to import. You can also drag entire folders — all supported files inside will be imported automatically.
Documents go through a processing pipeline:
- Parsing: content extracted from the file format
- Segmentation: split into searchable chunks
- Indexing: added to both vector and keyword search indexes
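The three stages can be sketched in miniature. This is not Mem's actual implementation (real parsers are format-specific, the segmenter is content-aware, and the vector index uses embeddings rather than word counts); it only shows what "split into searchable chunks" and "both vector and keyword search indexes" mean mechanically.

```python
from collections import Counter, defaultdict

def parse(raw: str) -> str:
    """Stage 1 (parsing): extract plain text. Here we just normalize
    whitespace; real parsers handle PDF, DOCX, XLSX, and so on."""
    return " ".join(raw.split())

def segment(text: str, chunk_words: int = 8) -> list[str]:
    """Stage 2 (segmentation): split into fixed-size word windows,
    a stand-in for the real content-aware segmenter."""
    words = text.split()
    return [" ".join(words[i:i + chunk_words])
            for i in range(0, len(words), chunk_words)]

def build_indexes(chunks: list[str]):
    """Stage 3 (indexing): a keyword (inverted) index mapping token ->
    chunk ids, plus a toy 'vector' per chunk (bag-of-words counts
    standing in for embeddings)."""
    keyword = defaultdict(set)
    vectors = []
    for i, chunk in enumerate(chunks):
        tokens = [t.strip(".,;:!?").lower() for t in chunk.split()]
        for tok in tokens:
            keyword[tok].add(i)
        vectors.append(Counter(tokens))
    return keyword, vectors

doc = "Rate limits are enforced in Redis. The API allows 100 requests per minute per key."
chunks = segment(parse(doc))
keyword, vectors = build_indexes(chunks)
print(sorted(keyword["redis"]))  # chunk ids that mention "redis" -> [0]
```

A query first narrows to matching chunk ids via the keyword index, then ranks those chunks by similarity against the vector index.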
Processing status is visible in the Library view. Once indexed, the document is Searchable — ready to use in conversations, global search, and connected AI tools.
For scanned PDFs, Mem first finishes the normal import so the app stays responsive. If a page has no text layer and needs a Vision model, Mem reads that scanned text in the background and adds it back into the document. If the Vision model is not configured, or your background budget is reached, the Library pauses the work and shows Continue when it can be resumed.
Searchable vs Deep-learned
Every document in the Library is in one of two states:
| State | What it means | How it happens |
|---|---|---|
| Searchable | Content is parsed, segmented, and indexed. AI can read and reference the document when you ask about it in the Timeline. | Automatic — happens when you import a file. |
| Deep-learned | Full AI extraction session produces structured memories, graph connections, and cross-references. The document's knowledge joins your memory graph permanently. | Opt-in — click Deep-learn on a source. |
Asking about a file in the Timeline reads its content directly. Deep-learning extracts persistent knowledge that connects to your memory graph.
What deep-learning produces
When you deep-learn a source, the AI analyzes the content — computing statistics for spreadsheets, reading text for documents — and creates:
- Memories — 2-5+ atomic insights per document (decisions, facts, procedures), each searchable on its own
- Graph connections — links to your existing related memories, surfacing relationships you might not have noticed
- Crystals — synthesized crystals when 3+ memories cluster around a topic
- Contradiction detection — flags conflicts with existing knowledge (e.g., a new policy that reverses a previous decision)
Deep-learning uses AI processing time. The result count appears in the pipeline indicator after completion (e.g., "Deep-learned (5)").
Searching Documents
Documents are searched alongside memories. A Timeline question like "What does the Q4 report say about churn?" searches both your saved memories and any imported documents that match.
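Conceptually, "searched alongside" means a single query is scored against both corpora and the results merge into one ranked list. A toy sketch follows, using naive token-overlap scoring; the real ranking (see Search & Relevance) is far more sophisticated, and all the sample texts here are invented.

```python
def score(query: str, text: str) -> int:
    """Naive relevance: count of query tokens present in the text."""
    q = set(query.lower().split())
    return len(q & set(text.lower().split()))

memories = ["We chose PostgreSQL for jsonb support",
            "Redis caps API rate limits at 100 req/min"]
doc_chunks = ["Q4 report: churn fell to 3.2 percent",
              "API rate limits are enforced in Redis"]

query = "API rate limits"
results = sorted(
    [(score(query, text), kind, text)
     for kind, corpus in (("memory", memories), ("document", doc_chunks))
     for text in corpus],
    reverse=True)
```

The top of `results` interleaves a memory and a document chunk, which is why one Timeline question can cite both.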
In the Library view, you can filter by status — Searchable, Deep-learned, Stale, or Error — to find sources that haven't been deep-learned yet or need attention.
Chat with Your Documents
Ask questions about any document directly in the Timeline. The answer draws from both the document and your memories, citing specific pages.
"What does the architecture review say about API rate limits?" returns an answer referencing page 12 of the document and your Redis decision from three months ago.
Batch Actions
Select multiple documents in the Library view and:
- Send to AI Now for cross-document analysis — compare reports, synthesize findings, or ask questions that span multiple documents
- Batch deep-learn — select sources that haven't been deep-learned and click "Deep-learn (N)" to process them all. For sources already deep-learned, the button shows "Re-analyze (N)" to refresh their extracted knowledge
Documents, Memories, and Threads
Three types of content, each with its own purpose:
| Type | What it is | Example |
|---|---|---|
| Memory | An atomic insight, decision, or fact | "We chose PostgreSQL for jsonb support" |
| Document | Reference material imported whole | A 40-page architecture review PDF |
| Thread | An AI conversation archive | Your ChatGPT session about async patterns |
Documents and threads are sources. Memories are the distilled knowledge. When you deep-learn a document or thread, individual insights get extracted as memories and connected to the knowledge graph. The original stays in the Library or Threads view as the source.
Next Steps
- Getting Started: The Timeline and all ways to add knowledge
- Background Intelligence: How imported knowledge connects to your graph
- Search & Relevance: How search ranks results across memories and documents