Library
Import and search your documents alongside your memories
The Library is where your files become usable knowledge.
It is for source material that should stay whole: PDFs, reports, spreadsheets, slide decks, markdown notes, and code. Mem parses them, indexes them, and lets them work alongside your memories instead of living as isolated attachments.
Library vs memories
Use the Library for source material you want to preserve whole. Use memories for the durable takeaways. A strong workflow is: import the document, ask grounded questions against it, then deep-learn it only when you want its knowledge to become lasting memories and graph connections.
Drop a 40-page architecture review into the Library. Ask in the Timeline: "What does the review say about API rate limits?" The answer cites page 12 of the document and a Redis decision you saved three months ago. Your documents and your memories search together.
The Library stores PDFs, spreadsheets, Word files, presentations, code, and other formats. Content is parsed, split into searchable segments, and indexed. Every document becomes searchable from the Timeline, global search, and connected AI tools.
The First Useful Document
If you are new, import one document you actually care about. Then ask one concrete question about it in the Timeline.
That is the core loop:
- add one real source
- ask one grounded question
- see the answer use both the document and your existing knowledge
That is enough for a first proof. Deep-learning can wait until this basic loop already feels useful.
Supported Formats
| Format | Extensions | What Happens |
|---|---|---|
| PDF | .pdf | Text extracted with layout awareness, split into segments, indexed |
| Word | .docx | Parsed to text with image extraction, segmented, indexed |
| Presentations | .pptx | Slide content extracted with images, indexed |
| Spreadsheets | .xlsx, .csv | Parsed to markdown tables, indexed. Multi-sheet XLSX renders as tabs |
| Markdown | .md | Parsed and indexed directly |
| Plain text | .txt, .org | Indexed as-is |
| Code | .py, .js, .ts, .rs, .go, .java, .c, .cpp, .rb, .swift | Indexed |
| URL | .html, .pdf | Converted to markdown, indexed |
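The per-format handling above amounts to routing by file extension. As a rough sketch only (the parser names and mapping here are hypothetical; Mem's actual pipeline is internal):

```python
from pathlib import Path

# Hypothetical mapping from extension to a parsing strategy,
# mirroring the table above.
PARSERS = {
    ".pdf": "layout-aware text extraction",
    ".docx": "text + image extraction",
    ".pptx": "slide content extraction",
    ".xlsx": "markdown table conversion",
    ".csv": "markdown table conversion",
    ".md": "direct parse",
    ".txt": "index as-is",
    ".py": "code indexing",
}

def route(filename: str) -> str:
    """Pick a parsing strategy by extension; unknown types are unsupported."""
    ext = Path(filename).suffix.lower()
    return PARSERS.get(ext, "unsupported")
```

For example, `route("review.pdf")` selects layout-aware extraction, while an unrecognized extension falls through to `"unsupported"`.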
Adding Documents
Drag files into the Timeline input, or use the Library view to import. You can also drag entire folders — all supported files inside will be imported automatically.
Documents go through a processing pipeline:
- Parsing: content extracted from the file format
- Segmentation: split into searchable chunks
- Indexing: added to both vector and keyword search indexes
Processing status is visible in the Library view. Once indexed, the document is Searchable — ready to use in conversations, global search, and connected AI tools.
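The segmentation and indexing steps can be sketched in miniature. This is an illustration only: real segmentation is sentence- and layout-aware rather than a fixed word count, and the keyword index shown would sit alongside a vector index over the same segment ids.

```python
import re
from collections import defaultdict

def segment(text: str, max_words: int = 50) -> list[str]:
    """Split parsed text into fixed-size word chunks (a stand-in for
    smarter, layout-aware segmentation)."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def build_keyword_index(segments: list[str]) -> dict[str, set[int]]:
    """Inverted index: token -> ids of the segments containing it."""
    index = defaultdict(set)
    for seg_id, seg in enumerate(segments):
        for token in re.findall(r"\w+", seg.lower()):
            index[token].add(seg_id)
    return index
```

Once both indexes cover a document's segments, a question in the Timeline can retrieve the matching chunks rather than re-reading the whole file.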
Searchable vs Deep-learned
Every document in the Library is in one of two states:
| State | What it means | How it happens |
|---|---|---|
| Searchable | Content is parsed, segmented, and indexed. AI can read and reference the document when you ask about it in the Timeline. | Automatic — happens when you import a file. |
| Deep-learned | Full AI extraction session produces structured memories, graph connections, and cross-references. The document's knowledge joins your memory graph permanently. | Opt-in — click Deep-learn on a source. |
Asking about a file in the Timeline reads its content directly. Deep-learning extracts persistent knowledge that connects to your memory graph.
What deep-learning produces
When you deep-learn a source, the AI analyzes the content — computing statistics for spreadsheets, reading text for documents — and creates:
- Memories — 2-5+ atomic insights per document (decisions, facts, procedures), each searchable on its own
- Graph connections — links to your existing related memories, surfacing relationships you might not have noticed
- Crystals — synthesized crystals when 3+ memories cluster around a topic
- Contradiction detection — flags conflicts with existing knowledge (e.g., a new policy that reverses a previous decision)
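The "3+ memories cluster around a topic" rule for crystals can be illustrated with a tiny sketch. Only the threshold comes from the text; the grouping-by-topic-label approach here is a hypothetical simplification of how Mem actually clusters memories.

```python
from collections import defaultdict

def crystallize(memories: list[tuple[str, str]], threshold: int = 3) -> dict[str, list[str]]:
    """Group (topic, insight) pairs and keep only topics with enough
    members to warrant a synthesized crystal."""
    by_topic = defaultdict(list)
    for topic, insight in memories:
        by_topic[topic].append(insight)
    return {topic: insights for topic, insights in by_topic.items() if len(insights) >= threshold}
```

A topic with three or more extracted memories yields a crystal; a topic with one or two stays as individual memories.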
Deep-learning uses AI processing time. The result count appears in the pipeline indicator after completion (e.g., "Deep-learned (5)").
Searching Documents
Documents are searched alongside memories. A Timeline question like "What does the Q4 report say about churn?" searches both your saved memories and any imported documents that match.
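Conceptually, this means memories and document segments are scored against the query and merged into one ranked list. A minimal sketch, using bag-of-words overlap as a stand-in for Mem's actual ranking:

```python
def search_all(query_terms: set[str],
               memories: dict[str, str],
               docs: dict[str, str]) -> list[tuple[str, int]]:
    """Score every memory and document segment by query-term overlap
    and return one combined ranked list."""
    corpus = {**memories, **docs}
    scored = [
        (name, len(query_terms & set(text.lower().split())))
        for name, text in corpus.items()
    ]
    return sorted((s for s in scored if s[1] > 0), key=lambda s: -s[1])
```

The point of the sketch is the single merged ranking: a question like the churn example above can surface a document segment above (or below) a saved memory, depending on which matches better.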
In the Library view, you can filter by status — Searchable, Deep-learned, Stale, or Error — to find sources that haven't been deep-learned yet or need attention.
Chat with Your Documents
Ask questions about any document directly in the Timeline. The answer draws from both the document and your memories, citing specific pages.
Batch Actions
Select multiple documents in the Library view and:
- Send to AI Now for cross-document analysis — compare reports, synthesize findings, or ask questions that span multiple documents
- Batch deep-learn — select sources that haven't been deep-learned and click "Deep-learn (N)" to process them all. For sources already deep-learned, the button shows "Re-analyze (N)" to refresh their extracted knowledge
Documents, Memories, and Threads
Three types of content, each with its own purpose:
| Type | What it is | Example |
|---|---|---|
| Memory | An atomic insight, decision, or fact | "We chose PostgreSQL for jsonb support" |
| Document | Reference material imported whole | A 40-page architecture review PDF |
| Thread | An AI conversation archive | Your ChatGPT session about async patterns |
Documents and threads are sources. Memories are the distilled knowledge. When you deep-learn a document or thread, individual insights get extracted as memories and connected to the knowledge graph. The original stays in the Library or Threads view as the source.
Next Steps
- Getting Started: The Timeline and all ways to add knowledge
- Background Intelligence: How imported knowledge connects to your graph
- Search & Relevance: How search ranks results across memories and documents