Never Lose a Session
Back up brilliant AI conversations (in Cursor, Codex, Claude Code, etc.) before they disappear. Distill them into permanent knowledge.
The Problem
You just had an epic debugging session. Three hours with Claude Code. You found a race condition, traced it through 15 files, and built a bulletproof fix with tests.
But AI conversations are ephemeral. Context gets compacted. Token limits hit. Sessions expire. That brilliant 200-message thread? The early context is already fading, compressed, or lost to make room for new messages.
"I solved this exact problem before. I just can't remember how. Or where. Or when."
The insights are scattered across sessions you can barely search. The reasoning behind decisions? Gone. The debugging steps that finally worked? Lost in compaction.
The Solution
Nowledge Mem lets you back up entire AI conversations, then distill them into permanent, searchable, connected knowledge.
How It Works
Save the Thread
During or after your session:
Claude Code:
Type /save, select nowledge-mem:save (MCP), and press Enter.
Codex CLI:
Type /save, select prompts:save_session, and press Enter.
Browser (ChatGPT, Gemini, Claude):
Click the Nowledge Mem extension icon → "Import Thread"
The full conversation is saved: every message, every code block, every insight.
Distill Into Memories
Open Nowledge Mem and navigate to Threads.
Select the imported thread and click Distill.
The AI reads the entire conversation and extracts:
- Decisions: "Chose sliding window over token bucket because..."
- Insights: "Race conditions in async callbacks need mutex locks"
- Patterns: "Testing time-based bugs requires mock clocks"
- Facts: "Redis SETNX provides atomic lock acquisition"
Each becomes a standalone, searchable memory with proper labels.
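For example, the Redis SETNX fact above might be captured alongside a short, runnable sketch, so the pattern is ready the next time you need it. A minimal sketch assuming the redis-py client and a local Redis instance (the function names are illustrative, not part of any real codebase):

```python
import uuid

import redis

r = redis.Redis()  # assumes a local Redis and the redis-py client

def acquire_lock(name: str, ttl_seconds: int = 10) -> str | None:
    """Try to acquire a distributed lock; return an owner token on success."""
    token = str(uuid.uuid4())
    # SET ... NX is the modern form of SETNX: the key is written only if it
    # does not already exist, which makes acquisition atomic. The TTL keeps
    # a crashed holder from blocking everyone forever.
    if r.set(f"lock:{name}", token, nx=True, ex=ttl_seconds):
        return token
    return None  # someone else holds the lock

def release_lock(name: str, token: str) -> None:
    # Release only if we still own the lock. This check-then-delete is not
    # atomic; a production version would use a Lua script instead.
    if r.get(f"lock:{name}") == token.encode():
        r.delete(f"lock:{name}")
```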
Explore the Graph
Your new memories are automatically connected to:
- Previous work on the same codebase
- Related architectural decisions
- Relevant entities (technologies, concepts, people)
Open Graph View to see how this session connects to everything else you know.
Search Anytime
Three months later, a similar bug appears:
Search: "payment race condition"
Nowledge Mem returns the full context: the problem, the debugging steps, the solution, the test approach.
No more re-solving solved problems.
What Gets Extracted
When you distill a thread, the AI creates memories categorized by type:
| Type | Example | Labels |
|---|---|---|
| Decision | "Used Redis for distributed locking" | decision, architecture |
| Insight | "Async callbacks need careful ordering" | insight, debugging |
| Procedure | "Steps to reproduce race conditions" | procedure, testing |
| Fact | "SETNX returns 1 if key was set" | fact, redis |
| Experience | "Debugging session on payment service" | experience, project |
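The Pattern and Procedure rows often go together: reproducing a time-based bug usually means pinning the clock. Here is a minimal sketch of the mock-clock pattern using Python's standard unittest.mock (the function under test is hypothetical):

```python
import time
from unittest.mock import patch

def is_expired(created_at: float, ttl: float = 30.0) -> bool:
    # Hypothetical function under test: the kind of time-based check
    # that hides race conditions.
    return time.time() - created_at > ttl

def test_expiry_boundary():
    # Pin the clock so the boundary case is deterministic instead of
    # depending on how fast the test runner happens to be.
    with patch("time.time", return_value=1_000.0):
        assert not is_expired(created_at=970.1)  # 29.9s old: still valid
        assert is_expired(created_at=969.9)      # 30.1s old: expired
```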
Saving from Different Tools
Claude Code
1. Press /
2. Type save
3. Select nowledge-mem:save (MCP)
4. Press Enter
The thread is saved to the codebase/folder where Claude Code is running.
Codex CLI
1. Press /
2. Type save
3. Select prompts:save_session
4. Press Enter
Codex lists available sessions and saves your selection.
Browser Extension (ChatGPT, Gemini, Claude Web)
1. Navigate to your conversation in the browser
2. Click the Nowledge Mem extension icon
3. Click "Import to Nowledge Mem"
The entire thread is imported with one click.
File Import
Export your conversation as markdown or JSON, then:
1. Open Nowledge Mem
2. Go to Threads → Import
3. Drop your file
See Integrations for supported formats.
The Compound Effect
One thread saved is useful.
Ten threads saved is a knowledge base.
A hundred threads? That's institutional memory.
"Junior dev hit the same bug today. Sent them my memory. They fixed it in 20 minutes instead of 3 hours."
Your debugging sessions aren't just conversations. They're training data for your future self.
Pro Tips
Distill Selectively
You don't need to distill every thread. Save the important sessions: the breakthroughs, the architectural decisions, the hard-won solutions.
Review Before Saving
For sensitive codebases, review what you're saving. Threads might contain proprietary code or credentials.
Label Consistently
Use consistent labels across memories: debugging, architecture, and decision work better than random tags.
Next Steps
- Give Your AI a Memory → Share context across all your AI tools
- Search Through Time → Find memories from specific time periods
- Integrations → Setup guides for each tool