Knowledge Graph
Your knowledge as an explorable, auditable, generative network. Navigate it visually, or hand a slice of it to Graph Intelligence and reason about it together.
Everything you save in Mem lives on one Super Knowledge Graph: three knowledge forms (Trace, Unit, Crystal) on top of seven node types and eleven edge types, designed for progressive disclosure. Light queries first. Walk relationships when you need more context. Walk the version chain when you need history. Look at communities and Crystals when you want the big picture.
The Knowledge Graph view is where that graph stops being a metaphor and becomes something you can pan, click, and ask questions of. Graph Intelligence is the agent that shares the canvas with you. You select; it reasons. You ask; the graph highlights. The conversation produces durable artifacts: Crystals, reports, persisted exploration sessions.
The Library answers "what do I know?" The Knowledge Graph answers "how does it connect, and what should I do with it next?"
First useful action
- Open the Graph view from the sidebar. The overview shows your most-connected memories, entities, and Crystals.
- Click any node, or run Compute to detect topic communities and color the graph by cluster.
- Open the Chat tab in the right panel and ask one real question about what you're looking at. "What is this cluster about?" "Where does this contradict itself?" "Find the shortest path between these two."
The agent reads your selection, calls the right tools, and writes the answer on the same canvas. You should be able to answer a real question about your own knowledge without leaving the graph.
Two ways to use the graph
Visually, on your own
Your knowledge as an interactive network. Drag to pan, scroll to zoom, click a node to inspect its details and edges. The timeline slider filters by date so you can watch a domain grow over weeks or months. Hold Cmd/Ctrl to multi-select; switch to lasso mode to draw a region. Compute runs Louvain community detection and colors the graph by cluster, so the topic structure becomes visible instead of buried under 1,000 dots.
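Community detection of this kind can be sketched in a few lines with networkx. This is an illustrative toy, not Mem's implementation: a graph with two obvious topic clusters, partitioned by Louvain and mapped to the node-to-color assignment a "color by cluster" pass needs.

```python
import networkx as nx

# Toy graph standing in for a slice of the knowledge graph:
# two tight topic clusters joined by one weak cross-link.
G = nx.Graph()
G.add_edges_from([
    ("vector search", "embeddings"), ("embeddings", "rerankers"),
    ("vector search", "rerankers"),                     # retrieval cluster
    ("sourdough", "hydration"), ("hydration", "proofing"),
    ("sourdough", "proofing"),                          # baking cluster
    ("embeddings", "sourdough"),                        # one cross-link
])

# Louvain community detection: each community becomes one color on the canvas.
communities = nx.community.louvain_communities(G, seed=42)

# Map node -> community index, the shape a "color by cluster" pass consumes.
color_of = {node: i for i, comm in enumerate(communities) for node in comm}
```

The point of running this on the canvas rather than in a notebook is that the partition is immediately visible: each color region is a topic you can select and question.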
With the Graph Intelligence Agent
The Chat tab opens a reasoning partner that shares your canvas. When you select something, the agent reads the same selection. Its tool calls light up the graph in real time, so you can audit its reasoning while it works.
Useful question patterns:
- Ask "What is going on here?" with a community selected. The agent summarises the cluster's central themes and hands you back a list of the strongest sources.
- Ask "Where does this contradict itself?" with a topic, entity, or memory selected. The agent walks the EVOLVES chain and surfaces the disagreements.
- Multi-select two nodes and ask "Find the shortest path between these two." The agent walks the graph and explains every hop, often via bridge entities you wouldn't have spotted.
- Ask "Draft a brief from this." with a cluster selected. The agent produces a Crystal or a report. You decide whether to keep it.
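The shortest-path question above reduces to a standard graph walk. A minimal sketch, assuming a networkx-style graph where edges carry a type attribute (the node names and edge types here are invented for illustration):

```python
import networkx as nx

# Toy knowledge slice: concepts as nodes, typed edges as attributes.
G = nx.Graph()
G.add_edge("RAG evaluation", "retrieval metrics", type="RELATES_TO")
G.add_edge("retrieval metrics", "nDCG", type="MENTIONS")
G.add_edge("nDCG", "search ranking", type="RELATES_TO")

# "Find the shortest path between these two": one hop at a time.
path = nx.shortest_path(G, "RAG evaluation", "search ranking")

# Each hop carries an edge type the agent can cite while the canvas
# highlights the corresponding segment.
hops = [(a, G.edges[a, b]["type"], b) for a, b in zip(path, path[1:])]
```

The per-hop edge types are what lets the agent explain the path rather than just assert it: every segment names the relationship it crossed.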
What makes this not "AI chat with a graph backend"
A few architectural decisions add up to a different kind of product.
Frozen context, single source of truth. When you send a message, the agent sees the exact graph state you were looking at. No drift, no split-brain, no state mirror to keep in sync. You move the canvas while the agent thinks; that just means the next turn picks up the new view. The conversation thread itself is the source of truth.
Visual reasoning chain. Every tool the agent calls can emit a canvas command: highlight_nodes, highlight_path, select_community. You see the agent's reasoning as it happens, not as a black-box answer. If the agent says "the path goes through these three bridges", those three bridges are already lit up on the canvas.
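The tool-to-canvas wiring can be pictured as a tool result that carries canvas commands alongside the answer. This is a hedged sketch of the pattern, not Mem's actual API; `ToolResult`, `find_bridges`, and the node IDs are all illustrative names.

```python
from dataclasses import dataclass, field


@dataclass
class ToolResult:
    answer: dict                                  # what goes back to the model
    canvas: list = field(default_factory=list)    # commands the UI replays


def find_bridges(node_ids: list[str]) -> ToolResult:
    # Real tool logic would query the graph here; we just take the
    # first three IDs as stand-in "bridges".
    bridges = node_ids[:3]
    return ToolResult(
        answer={"bridges": bridges},
        canvas=[{"command": "highlight_nodes", "node_ids": bridges}],
    )


result = find_bridges(["ent-12", "ent-40", "ent-77", "ent-90"])
```

Because the canvas command travels with the tool result, the highlight appears the moment the tool returns, before the agent has finished composing its prose answer.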
Artifacts that fit your existing surfaces. When the agent produces a Crystal, that Crystal is a real Memory with is_crystal=true. When it produces a report, that report is a real Source in the Library. There are no new entity types to learn. Anything the agent saves is searchable, linkable, and editable from the same surfaces you already know.
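The "no new entity types" claim can be made concrete with a sketch of the record shape. Field names beyond `is_crystal` are assumptions for illustration, not Mem's schema:

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    id: str
    content: str
    is_crystal: bool = False                       # a Crystal IS a Memory
    source_ids: list = field(default_factory=list) # evidence behind a Crystal


crystal = Memory(
    id="mem-901",
    content="Stable reference on retrieval eval, with [[nDCG]] wikilinks.",
    is_crystal=True,
    source_ids=["mem-11", "mem-42", "mem-87"],     # 3+ independent sources
)
```

One record type with a flag means every surface that already handles Memories (search, linking, editing) handles Crystals for free.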
Step-based history. Every turn captures both the user's frozen graph context and the agent's emitted canvas commands as message metadata. You can replay an exploration, resume it later, or share it. The conversation is not just a chat log; it is a complete record of an investigation.
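A replayable turn might look something like the following. The metadata shapes are illustrative assumptions, but they show the two halves the text describes: the user's frozen graph context, and the agent's emitted canvas commands.

```python
# One user turn: the frozen graph context is captured at send time.
turn = {
    "role": "user",
    "content": "What is this cluster about?",
    "metadata": {
        "frozen_context": {
            "selected_nodes": ["ent-12", "ent-40"],
            "viewport": {"x": 120, "y": -40, "zoom": 0.8},
        },
    },
}

# The agent's reply: canvas commands ride along as metadata.
reply = {
    "role": "assistant",
    "content": "This cluster centers on retrieval evaluation.",
    "metadata": {
        "canvas_commands": [
            {"command": "select_community", "community_id": 3},
        ],
    },
}

# Replaying a session = walking the thread in order and re-applying
# each turn's canvas commands.
replay = [m["metadata"].get("canvas_commands", []) for m in (turn, reply)]
```

Nothing outside the thread is needed to reconstruct the investigation, which is what makes resuming and sharing cheap.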
What the agent can do
Twenty or so specialised tools, grouped by what they answer for you:
- Navigate the graph. Shortest paths between concepts, walks across RELATES_TO edges with their temporal context, batch lookup of selected node details, neighbor traversal, community membership.
- Find evidence. Source memories behind a Crystal or entity, the EVOLVES chain showing how a piece of knowledge changed, the source documents that ground a claim, the past threads where you explored a topic.
- Analyze structure. Subgraph PageRank for centrality, bridge entities that connect communities, community summaries with member counts and key entities.
- Synthesize and save. A Crystal that distils 3+ source memories into a stable reference page (with [[Entity Name]] wikilinks already woven in), or a longer report or blog draft that lands in the Library and re-enters the search index.
You don't pick the tool; you ask the question and the agent picks. What we expose to you is the discipline: every claim it makes is traceable to memories, sources, or threads you can open. Crystals require three or more independent sources. Reports cite their evidence.
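The structural analyses above have standard graph-algorithm cores. A sketch with networkx, using the built-in karate club graph as a stand-in for a selected subgraph (not Mem's code): PageRank for centrality, then bridge nodes whose neighbors span more than one community.

```python
import networkx as nx

# Stand-in for a selected subgraph.
G = nx.karate_club_graph()

# Subgraph PageRank: which node is most central in this selection?
rank = nx.pagerank(G)
central = max(rank, key=rank.get)

# Community membership, then bridges: a bridge entity has neighbors
# in at least two different communities.
communities = nx.community.louvain_communities(G, seed=7)
color_of = {n: i for i, comm in enumerate(communities) for n in comm}
bridges = [
    n for n in G
    if len({color_of[v] for v in G.neighbors(n)}) > 1
]
```

In the product the same computations run on your selection, and the results come back as highlights and citations rather than raw scores.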
From the Library, with Investigate
Most of the time you won't start in the Graph view. You'll be reading the Library when something raises a real question. Click Investigate on any wiki page (entity, crystal, or topic). The Graph view opens with that node already selected, or, for a topic, with every entity in the cluster highlighted at once. The Graph Intelligence chat shows an "Investigating topic: <name>" banner above the input, and any Crystal it creates from that session carries the cluster as its anchor.
That handoff is the loop: read in Library, dig in Graph, save back to Library.
Boundaries
A few things stay out of your way on purpose.
- The agent only proposes. Crystals and reports it produces show up as artifacts in chat. You decide what to keep.
- The agent reads the same canvas you do. There is no hidden context. If you cannot see it, neither can the agent.
- Graph Intelligence runs through your configured Remote LLM. Set it in Settings > LLM Providers.
- A heavy run consumes tokens. Background and on-demand work have separate budgets and rate limits, configurable in Settings > Knowledge Processing.
Where to go next
- LLM Wiki: the model behind the Library wiki and the Library → Investigate → Graph Intelligence loop.
- Library: the read-side surface where most Investigate sessions begin.
- Background Intelligence: the work that builds and maintains the graph in the first place.
- Crystals: how stable reference pages are synthesised, and what the agent's CreateCrystal actually produces.
- Building memory systems for AI agents: the long-form essay on the Super Knowledge Graph and what makes it more than a regular knowledge graph. The post also embeds the original Chinese-language talk recording.