Key takeaways from Nate B Jones' second brain series — from the original 8 building blocks to Open Brain (Postgres + MCP), the two-door principle, and the implementation gap.
Notes on Nate B Jones' framework for building an AI-powered second brain in 2026. The central thesis: the second brain is no longer a passive storage system — it's an active, automated system that works while you sleep.
The traditional second brain (Tiago Forte, PARA, Zettelkasten) requires constant manual effort: capture, organize, distill, express. In 2026, AI enables automating most of that work. The user only needs to do one thing: capture the thought. The system handles the rest.
One frictionless entry point. In Jones' example, a private Slack channel. The key: zero friction. If capturing requires more than one step, you won't do it consistently.
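As a concrete illustration (not Jones' published implementation), a capture endpoint can be a few lines with Slack's Bolt SDK. The `classifyAndFile` hook is hypothetical; it stands in for the classifier and bouncer sketched below:

```ts
import { App } from "@slack/bolt";

// Hypothetical downstream hook; see the classifier and bouncer sketches below.
import { classifyAndFile } from "./pipeline";

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Every message posted to the private capture channel is forwarded as-is.
// Zero friction: the user types a thought and is done.
app.message(async ({ message }) => {
  if ("text" in message && message.text) {
    await classifyAndFile(message.text);
  }
});

await app.start(process.env.PORT ? Number(process.env.PORT) : 3000);
```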
An AI agent that automatically classifies each thought without user intervention. It receives raw text and decides: is this a person, a project, an idea, or an admin task?
Each entry type has a fixed schema: name, status, next action. Schema consistency is what allows automation to work reliably.
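Jones doesn't publish the exact schemas, but a minimal sketch in TypeScript, assuming only the shared fields he names (name, status, next action) plus a type discriminator, might look like this; the status values and `lastTouched` field are illustrative assumptions:

```ts
// The four entry types the classifier can choose from.
export type EntryType = "person" | "project" | "idea" | "admin";

// Every entry satisfies the same minimal contract: name, status, next action.
// The status values and lastTouched are assumptions for illustration.
export interface Entry {
  type: EntryType;
  name: string;
  status: "active" | "waiting" | "done" | "needs-review";
  nextAction: string;
  lastTouched: string; // ISO date; lets the digest detect stalled items
}
```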
Storage organized by categories. In the example: Notion databases for people, projects, ideas, admin, and an inbox log.
An "Inbox Log" that records everything the system did. Each classification is documented: what came in, how it was classified, where it was filed. This is fundamental for maintaining trust in the system.
The most important mechanism for system quality. When the AI classifies a thought, it assigns a confidence score between 0 and 1. If the score is below the threshold (e.g., 0.6), the "bouncer" prevents the item from entering main storage. Instead, it logs it in the Inbox Log with "needs review" status and sends a message asking for clarification.
Key principle: when in doubt, don't file incorrectly — ask for review. This prevents the system from filling up with garbage.
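A sketch of the bouncer under those rules, reusing the `Entry` type from the schema sketch above; the 0.6 threshold is the example value Jones cites:

```ts
import type { Entry } from "./schema"; // hypothetical module from the schema sketch

interface Classification {
  entry: Entry;
  confidence: number; // 0..1, produced by the classifier
}

const CONFIDENCE_THRESHOLD = 0.6;

// A bouncer decision: file the entry, or hold it for human review.
type BouncerDecision =
  | { action: "file"; entry: Entry }
  | { action: "review"; entry: Entry; reason: string };

function bounce({ entry, confidence }: Classification): BouncerDecision {
  if (confidence >= CONFIDENCE_THRESHOLD) {
    return { action: "file", entry };
  }
  // Below threshold: never guess. The caller writes this to the Inbox Log
  // with "needs review" status and messages the user for clarification.
  return {
    action: "review",
    entry: { ...entry, status: "needs-review" },
    reason: `confidence ${confidence.toFixed(2)} below ${CONFIDENCE_THRESHOLD}`,
  };
}
```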
The system doesn't wait for you to search for information — it brings it to you. Daily and weekly digests sent automatically with relevant information: upcoming meetings, stalled projects, forgotten ideas.
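One digest rule sketched out (stalled projects), assuming entries carry a `lastTouched` timestamp as in the schema sketch above; the two-week cutoff is an assumption:

```ts
import type { Entry } from "./schema"; // hypothetical module from the schema sketch

const STALL_DAYS = 14; // assumption: two weeks without movement counts as stalled

// One rule of a daily digest: surface projects nobody has touched recently.
function stalledProjects(entries: Entry[], now = new Date()): string[] {
  const cutoff = now.getTime() - STALL_DAYS * 24 * 60 * 60 * 1000;
  return entries
    .filter((e) => e.type === "project" && e.status === "active")
    .filter((e) => new Date(e.lastTouched).getTime() < cutoff)
    .map((e) => `Stalled: ${e.name} (next action: ${e.nextAction})`);
}
```

A scheduler (cron, a Slack workflow, whatever you already run) executes rules like this daily and posts the result back into the capture channel.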
A mechanism to correct AI mistakes through simple chat commands. If something was misclassified, a command reclassifies it without needing to open the database.
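A sketch of that correction loop as a chat command; the `/reclassify` syntax and the `Store` interface are assumptions, not Jones' actual commands:

```ts
import type { Entry, EntryType } from "./schema"; // hypothetical module

// Minimal store interface the command needs; any backend (Notion, Postgres) fits.
interface Store {
  get(id: string): Promise<Entry | undefined>;
  update(id: string, patch: Partial<Entry>): Promise<void>;
}

const TYPES: EntryType[] = ["person", "project", "idea", "admin"];

// Parses "/reclassify <id> <type>" and applies the fix without opening the database.
async function handleReclassify(input: string, store: Store): Promise<string> {
  const match = input.match(/^\/reclassify\s+(\S+)\s+(\S+)$/);
  if (!match) return "Usage: /reclassify <id> <type>";
  const [, id, type] = match;
  if (!TYPES.includes(type as EntryType)) {
    return `Unknown type "${type}". Expected one of: ${TYPES.join(", ")}`;
  }
  if (!(await store.get(id))) return `No entry with id ${id}`;
  await store.update(id, { type: type as EntryType });
  return `Reclassified ${id} as ${type}`;
}
```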
The user should only do one thing: capture thoughts in one place. Automation handles classification, filing, surfacing, and correction.
Keep capture, classification, storage, surfacing, and correction distinct: this separation allows changing any component without affecting the others.
Constrain AI outputs with strict JSON schemas. The goal is structured, predictable data, not creative writing: essentially what software engineering calls "contract-first design", applied to prompts.
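One way to enforce such a contract is with zod; a minimal sketch, where the exact fields are assumptions consistent with the schema sketch above:

```ts
import { z } from "zod";

// The contract the classifier prompt must satisfy: structured data, not prose.
const ClassificationSchema = z.object({
  type: z.enum(["person", "project", "idea", "admin"]),
  name: z.string().min(1),
  status: z.enum(["active", "waiting", "done", "needs-review"]),
  nextAction: z.string(),
  confidence: z.number().min(0).max(1),
});

type Classification = z.infer<typeof ClassificationSchema>;

// Validate raw model output before anything touches storage.
function parseClassification(raw: string): Classification | null {
  try {
    return ClassificationSchema.parse(JSON.parse(raw));
  } catch {
    return null; // contract violated: route to the Inbox Log for review
  }
}
```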
The Inbox Log and confidence scores aren't optional — they're the foundation of user trust in the system. Without auditing, the system becomes a black box.
If the AI isn't sure, it should log the item for review rather than filing it incorrectly. Better a false negative (item pending review) than a false positive (misclassified item).
Jones didn't stop at the video. His Substack documents the framework's evolution across four key posts that deepen and extend the original architecture.
The most important post in the series. Jones identifies that the real AI bottleneck isn't prompts — it's memory. Every time you open a new chat, the AI starts from zero. Every tool switch costs minutes re-explaining context that should already be there.
The solution: Open Brain — a Postgres database connected via MCP (Model Context Protocol) that any AI can query. Claude, ChatGPT, Cursor — all read and write to the same knowledge base through a single open protocol. No SaaS middlemen, no per-tool silos. Cost: $0.10 to $0.30 per month.
The architecture is deliberately simple: one Postgres database of memories, one MCP server in front of it, and any number of AI clients reading and writing through that single interface.
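A minimal sketch of that shape with the official TypeScript MCP SDK and node-postgres; the table name, columns, and tool names are assumptions, not Jones' published kit:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import pg from "pg";
import { z } from "zod";

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
const server = new McpServer({ name: "open-brain", version: "0.1.0" });

// One door for every agent: Claude, ChatGPT, Cursor all call the same tools.
server.tool("save_memory", { text: z.string() }, async ({ text }) => {
  await pool.query("INSERT INTO memories (text) VALUES ($1)", [text]);
  return { content: [{ type: "text", text: "saved" }] };
});

server.tool("search_memories", { query: z.string() }, async ({ query }) => {
  const { rows } = await pool.query(
    "SELECT text FROM memories WHERE text ILIKE $1 ORDER BY id DESC LIMIT 20",
    [`%${query}%`]
  );
  return { content: [{ type: "text", text: rows.map((r) => r.text).join("\n") }] };
});

await server.connect(new StdioServerTransport());
```

The point is the shape, not the specifics: every client speaks MCP, so adding a new AI tool means zero new integration code.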
The kit includes four bootstrap prompts: one to migrate existing memories, one to generate a personalized first 20 captures, quick-capture templates, and a weekly review that synthesizes patterns and resurfaces forgotten threads.
After the original video, 50 people built the system with completely different tools from the ones recommended, and it worked just the same. Four shared principles emerged from those builds.
Jones also expanded from 8 to 12 design principles, from "reduce the human's job to one reliable behavior" to "design for restart, not perfection."
The most recent post introduces the concept of a shared surface with two doors: the agent enters through one door, the human through the other. Both read and write the same data, each doing what it does best.
The post walks through six concrete extensions built on Open Brain.
It also offers four design principles for generating your own use cases.
Jones distinguishes three modes of interacting with the same database.
Claude, ChatGPT, and other clients are simply different interfaces to the same database. Understanding when to use each mode changes everything.
In an earlier post (July 2025), Jones framed the fundamental problem: while 73% of US companies already use AI, only 8% consider their implementations mature (McKinsey). It's not a technology problem — it's a practical integration problem.
His critique of the productivity ecosystem: the world splits between the "hype machine" (articles about how AI will revolutionize everything) and the "feature comparison industrial complex" (Notion vs Obsidian vs Mem.ai). What's missing is the bridge — structured guidance that takes you from "I have these tools" to "these tools save me 5 hours per week."
This framework validates the direction of jonmatum.com as a second brain. The blocks we already have (MDX capture, type-based classification, knowledge graph, llms.txt) cover blocks 1-4. What's missing — and what Jones emphasizes as the differentiator — is proactive surfacing (block 7) and the conversational interface (blocks 6 and 8). Exactly what we outlined in issue #12.
Jones' evolution toward Open Brain with MCP is particularly relevant: jonmatum.com already exposes /llms.txt and /llms-full.txt as agent-friendly interfaces. The natural next step is an MCP server that exposes the knowledge graph and embeddings as tools any agent can query — aligned with phase 3 of issue #12.
The two-door principle also applies directly: the website is the human door, the APIs and llms.txt are the agent door. Both access the same knowledge graph.
Artificial intelligence (AI): the field of computer science dedicated to creating systems capable of performing tasks that normally require human intelligence, from reasoning and perception to language generation.
AI agents: autonomous systems that combine language models with reasoning, memory, and tool use to execute complex multi-step tasks with minimal human intervention.
Prompt engineering: the discipline of designing effective instructions for language models, combining clarity, structure, and examples to obtain consistent, high-quality responses.