Most note-taking systems die for the same reason: maintenance.
You start strong. New folder, fresh Obsidian vault, color-coded tags, a clean PARA structure. Three weeks in, your inbox is full of unfiled notes. Three months in, you're using Apple Notes again because at least it doesn't make you feel guilty.
The reason isn't laziness. It's that "linking 50 notes to each other every time you read a paper" is genuinely boring work that humans don't want to do. The wiki dies when the maintainer gets tired.
Andrej Karpathy posted a gist in early 2026 that fixes the problem at the root: let an LLM maintain it. Not "use an LLM to query your notes." Let it actually own the wiki, write the pages, link the concepts, lint the contradictions, and keep the thing alive while you're asleep.
He calls it the LLM Wiki. It's the cleanest second-brain pattern I've seen in 10 years of failed Notion vaults. This guide walks you through what it is, why it works, and how to wire it up in Obsidian + Claude Code in an afternoon.
Why every other note-taking system breaks
The pattern is always the same:
Every chat starts from zero. Every insight evaporates when the session closes. You're paying the same context tax every time.
Translated: when you ask ChatGPT or Claude the same question next month, it re-derives the answer from scratch. The 90 minutes you spent thinking about it last time? Gone. The five sources you read? Gone. The mental model you built? Gone.
That's goldfish memory. And that's how 99% of people use AI in 2026: as a smarter Google that forgets you the second you close the tab.
What you actually want is the opposite shape:
"A persistent, compounding research artifact. Synthesis happens once, then gets updated instead of recomputed from scratch on every question."
That's Karpathy's framing. Read it twice. The whole method falls out of that one idea.
The 3-layer architecture
Karpathy's vault has three layers. They're so cleanly separated that even his linter can tell when you've broken the rule.
Layer 1: Sources (raw, immutable)
"Raw sources — immutable documents (articles, papers, images, data files). The LLM reads but never edits." — Karpathy
Everything you collect goes here, untouched. PDFs, web clippings, podcast transcripts, customer call notes, screenshots, your own raw thoughts. The rule is simple: once it lands in sources/, you don't edit it. Not even for typos. This is your immutable corpus, the source of truth.
In a real sources/ folder you might see:
```
sources/
├── 2026-04-acme-call.txt
├── customer-survey-q1.csv
├── pitch-deck-v3.pdf
├── competitor-pricing.html
└── onboarding-feedback.md
```
Why immutable? Because if the LLM ever rewrites a source, you lose the ability to audit what your conclusions are based on. Sources are the bedrock. Everything else is interpretation.
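Immutability here is a convention, not something the filesystem enforces. If you want a mechanical guarantee, a minimal sketch like the following can snapshot a hash of every source and flag anything that changed since the last check (the manifest filename and function names are invented for illustration, not part of Karpathy's gist):

```python
import hashlib
import json
from pathlib import Path

SOURCES = Path("sources")
MANIFEST = Path(".sources-manifest.json")  # invented snapshot file, not part of the gist

def hash_file(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_sources() -> list[str]:
    """Return names of sources/ files that changed since the last snapshot."""
    old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    new = {p.name: hash_file(p) for p in sorted(SOURCES.iterdir()) if p.is_file()}
    changed = [name for name, digest in new.items() if name in old and old[name] != digest]
    MANIFEST.write_text(json.dumps(new, indent=2))  # refresh the snapshot
    return changed
```

A non-empty result means a source was edited after ingestion, so wiki pages citing it may no longer match what it says.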
Layer 2: The wiki (AI-owned)
"The wiki is LLM-generated markdown files with summaries, entity pages, concepts, and cross-references. You read it; the LLM writes it." — Karpathy
This is the magic layer. The LLM writes here. You don't.
Account profiles. Concept pages. Weekly recaps. Summaries that pull from 12 sources. Cross-links between everything. The agent maintains it the way a good Wikipedia editor would, except it never gets tired and never abandons a page halfway through.
```
wiki/
├── acme-account.md
├── pricing-strategy.md
├── competitor-map.md
├── 2026-q1-recap.md
└── stripe-vs-paddle.md
```
The mental flip that makes this work: stop thinking of these as "my notes." They're notes the AI keeps for you. You direct, you ask questions, you decide what to research. The agent does the editorial work (extracting, summarizing, cross-linking, deduplicating, updating).
A single new source dropped into Layer 1 might trigger writes to 10–15 pages in Layer 2. That's the compounding part. Every source you ingest makes the wiki smarter, not just larger.
Layer 3: The schema (CLAUDE.md)
"The schema is a configuration document specifying structure, conventions, and workflows. This makes the LLM a disciplined wiki maintainer rather than a generic chatbot." — Karpathy
This is where Claude Code's CLAUDE.md earns its keep. It tells the agent how the vault is organized and which workflows are available.
```markdown
# LLM Wiki schema
## Layers
- `sources/` — raw, never edit. Cite by filename in frontmatter.
- `wiki/` — AI-maintained. Backlink every concept. Summary at top.
- `log.md` — append-only session log. Newest entries first.
## Workflows
- /capture <url> — clip + ingest a URL into sources/, then update wiki/
- /lint — health check: broken links, orphan pages, contradictions
- /sync — refresh wiki/ pages from any new sources/
- /digest — generate this week's recap from log.md
## Rules for wiki/ pages
- Frontmatter: title, summary, sources, last_updated
- Backlink every concept page from at least one parent
- Cite source filenames inline as [[source: filename.md]]
- Run /lint after every batch write
```
That CLAUDE.md is what turns Claude Code from a chatbot into a wiki maintainer. Drop it in your vault root and every session starts knowing the rules.
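Under those rules, a finished wiki/ page might look like this (the contents are invented for illustration; the filenames echo the sources/ example above):

```markdown
---
title: Acme account
summary: Mid-market customer; renewal closed, pricing sensitivity is the open risk.
sources: [2026-04-acme-call.txt, customer-survey-q1.csv]
last_updated: 2026-05-02
---

Acme pushed back on the new tiering during the April call; see
[[pricing-strategy]] for the broader picture. The pushback first surfaced in
[[source: 2026-04-acme-call.txt]].
```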
The four operations that run the system
Karpathy's gist describes the daily flow as four operations. These map cleanly onto Claude Code skills / slash commands: /capture, /sync, /lint, /digest. Each one is a 30-line markdown file at .claude/skills/<name>/SKILL.md.
/capture <url>: Ingest
Drop a URL, paper, or text dump. The skill clips it into sources/, reads it, and updates every relevant wiki page.
"Process new sources one at a time. The LLM reads, discusses takeaways, writes summaries, updates relevant pages, and appends to a log. A single source might touch 10–15 wiki pages."
This is the operation that makes the vault compound. Every capture potentially links into a dozen existing pages.
/sync: Reconcile
Sometimes you dump a batch of sources at once. /sync walks every file in sources/ newer than the last sync, finds wiki pages it should affect, and reconciles them.
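The only mechanical part of /sync is deciding which sources count as new. One way to sketch that check (the `.last-sync` marker file is an invented convention, not part of Karpathy's gist):

```python
from pathlib import Path

SOURCES = Path("sources")
STAMP = Path(".last-sync")  # invented marker file; its mtime records the last sync

def sources_to_sync() -> list[Path]:
    """sources/ files modified since the last sync ran (all of them on first run)."""
    last = STAMP.stat().st_mtime if STAMP.exists() else 0.0
    return sorted(p for p in SOURCES.iterdir()
                  if p.is_file() and p.stat().st_mtime > last)

def mark_synced() -> None:
    STAMP.touch()  # stamp "now" as the new high-water mark
```

The agent walks `sources_to_sync()`, reconciles the affected wiki pages, then calls `mark_synced()`.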
/lint: Health check
"Periodically health-check for contradictions, stale claims, orphan pages, and data gaps."
This is the killer feature. Run /lint once a week and Claude reports:
- Pages with no incoming links (probably should be merged or deleted)
- Pages whose sources have been updated but the page hasn't
- Concepts described differently across two pages (contradictions to resolve)
- Topics with thin coverage that you should research more
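Contradiction-spotting genuinely needs the model, but the link checks are plain bookkeeping. A sketch of the orphan and broken-link passes, assuming the `[[wikilink]]` and `[[source: ...]]` conventions from the schema above:

```python
import re
from pathlib import Path

WIKI = Path("wiki")
# [[page]] links; the lookahead skips [[source: ...]] citations
LINK = re.compile(r"\[\[(?!source:)([^\]|#]+)")

def lint_links() -> tuple[set[str], set[str]]:
    """Return (orphans, broken): pages nothing links to, and links with no page."""
    pages = {p.stem for p in WIKI.glob("*.md")}
    linked = {m.strip() for page in WIKI.glob("*.md")
              for m in LINK.findall(page.read_text())}
    return pages - linked, linked - pages
```

Orphans are merge-or-delete candidates; broken links are pages the agent should either create or fix. In practice you would whitelist top-level pages that legitimately have no parent.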
Try doing that in Notion by hand. You won't, because nobody does. That's why your wiki dies.
/digest: Synthesis
The Sunday-evening operation. Read the week's log entries, write a recap, link to the new pages, surface the patterns you didn't notice in the moment. This is where the second brain becomes a thinking second brain.
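Only the synthesis step needs the model; gathering the week's raw material is mechanical. A sketch that pulls the last seven days of entries out of log.md, assuming each entry starts with an ISO date the way the /capture skill writes them:

```python
import re
from datetime import date, timedelta
from pathlib import Path

LOG = Path("log.md")
ENTRY = re.compile(r"^(\d{4}-\d{2}-\d{2})\b")  # assumes entries start with an ISO date

def this_weeks_entries(today: date) -> list[str]:
    """Log lines dated within the last seven days, as input for the digest."""
    cutoff = today - timedelta(days=7)
    return [line for line in LOG.read_text().splitlines()
            if (m := ENTRY.match(line)) and date.fromisoformat(m.group(1)) >= cutoff]
```

The digest skill feeds these lines, plus the pages they mention, into the recap prompt.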
Why this beats RAG
Most "chat with your notes" tools are RAG systems: they retrieve relevant chunks at query time, stuff them into the context, and generate an answer. It works, kind of. It also breaks the second your corpus passes a few hundred pages or your questions get genuinely conceptual.
Karpathy's pitch:
"Instead of just retrieving from raw documents at query time, the LLM incrementally builds and maintains a persistent wiki."
Three things change once the wiki is the artifact:
- Synthesis is precomputed. The "what's our pricing strategy?" page already exists, written carefully, last updated yesterday. You don't re-synthesize on every query.
- Cross-references are explicit. When you ask about Acme, the page links to the contract terms, the call notes, the pricing comparison, and the competitor map. All of which live in the wiki, not in some embedding space.
- You can read it. Your "second brain" is now actually inspectable. You can browse it. Print it. Audit what the LLM concluded vs. what the sources actually say.
It's the difference between "I have a search engine over my files" and "I have a research artifact that compounds."
Setting it up in Obsidian + Claude Code
Step by step, in order. Should take about 90 minutes the first time.
1. Create the vault structure
```shell
mkdir -p ~/notes/{sources,wiki}
cd ~/notes
touch CLAUDE.md log.md index.md
```
Open the folder in Obsidian as a new vault.
2. Drop in the schema
Save Karpathy's CLAUDE.md template (or my expanded version above) at the vault root. This is the single most important file. Spend 20 minutes tightening it to your domain: a startup founder, a researcher, and a consultant will write very different rules.
3. Add the skills
Create .claude/skills/ and add four SKILL.md files: capture, sync, lint, digest. The bones of /capture look like this:
```markdown
---
description: Clip a URL or file path into sources/, then update affected wiki pages. Use when the user gives you a link to ingest or says "add this to the vault."
---
## Step 1: Save to sources/
- Fetch the URL or read the file path provided
- Save the cleaned content to `sources/YYYY-MM-DD-<slug>.md`
- Add frontmatter: source_url, ingested_at, content_type
## Step 2: Find affected wiki pages
- Read `index.md` to see all wiki pages
- Identify pages where this source adds new info, contradicts existing claims, or fills a gap
## Step 3: Update in batches
- Edit each affected wiki page with the new info
- Add a backlink: `[[source: 2026-05-02-foo.md]]`
- Update `last_updated` in the page frontmatter
## Step 4: Append to log
- Add an entry to `log.md`: timestamp, source, pages touched, key takeaway
```
The other three follow the same pattern. The full templates fit on one screen each.
4. Capture your first source
Open Claude Code in the vault directory. Run:
```
/capture https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f
```
Watch what happens. Claude clips the gist into sources/, reads it, looks at your (mostly empty) wiki, and likely creates 3–4 starter pages: an Andrej Karpathy entity page, an LLM Wiki concept page, a Vannevar Bush / Memex page if it caught the historical reference, and an entry in log.md. From source #1, you've got a network.
5. Run /lint
Run it now. It won't find much (your wiki has 4 pages). Run it again after you've ingested 50 sources and watch it surface contradictions you didn't realize you were holding.
6. Set up a weekly digest
Use Claude Code's Routines (/schedule) to run /digest every Sunday at 6pm. By the time you sit down for the week ahead, your weekly synthesis is waiting in wiki/2026-w19-recap.md.
What it actually grows into
To make this concrete: Karpathy's vault, built on this method, became a research artifact of around 100 articles and 400,000 words, all written and maintained by the agent. That's roughly the size of a textbook, generated incidentally as he went about his day.
I can't promise you 400k words. I can promise you that after 30 days of /capture-ing what you read, you'll have something you've never had before: a notes system that's getting better while you sleep.
The reason every previous productivity system you tried died is the same reason this one survives. The thing that always killed knowledge bases (who maintains it?) is no longer your job.
A few honest gotchas
Five things I wish someone had told me before week one:
- Don't migrate your existing Notion / Obsidian vault on day one. Start fresh. Migrate sources gradually as you reference them. Otherwise you spend a week porting and the system never gets used.
- Resist editing the wiki yourself. The whole point is that the AI owns Layer 2. If you start editing pages, you erode the contract, and the agent won't trust its own work next session. If a page is wrong, ingest a new source that contradicts it and let /sync reconcile.
- Set token budget expectations. Each /capture of a long source can run 5–15k tokens. Either run it on a Claude Code routine (cloud) or budget for steady local usage.
- Use /lint aggressively. Weekly, minimum. Monthly is too sparse, and contradictions compound.
- Don't index everything. Be a curator. Bad sources poison the wiki. The human's job is picking what's worth thinking about. The LLM's job is everything else.
Want a workforce that runs on this same pattern?
The LLM Wiki is one tile in a bigger picture. Once you've got an AI that maintains your knowledge base, the next obvious step is an AI that does the work the knowledge base is for: drafting proposals from your account pages, scanning your sources for fresh trends, answering DMs in your voice, syncing your meetings into the vault automatically.
That's what the Glitch Workforce is. It's an open-source, native AI workforce that runs on your own Mac. You hire employees by chatting with Professor Glitch on Telegram, and each one runs on Claude Code under the hood, using the same CLAUDE.md / Skills / Routines stack the LLM Wiki uses.
If you want the structured path through all of this (second brain, AI workforce, automation), join the community. Inside there are hands-on labs, the Thinker's Mind episodes on how to think strategically about AI, and 130+ builders shipping real systems together.
If you just want free daily breakdowns, follow @pro.glitch on TikTok.
The vault you start today is the one you'll wish you'd started two years ago. Capture one URL today. The compounding starts there.

