
HeptaBrain: Shipping the Discipline as Code, Not Just the Tool

Most AI knowledge management tools fail not because the AI is weak, but because they lack the discipline to separate current state from crystallised knowledge. HeptaBrain is the methodology I open-sourced after 11 days of daily dogfood: 4 Claude Code slash commands, 3 dev specs, 6 principles. This is the long-form behind the LinkedIn launch — concrete failure modes for each principle, plus the 6th governance principle I added after week two.

If you have used AI to write more than ten days' worth of anything, you know this view:

Two hundred markdown files. Every header says "important conclusion." Wikilinks everywhere, but each one leads to another 200-word relay station. Two weeks in, you realise you have regenerated the same insight three times, in three different files, none of which is canonical.

This is not the AI being bad at writing. The AI writes faster than your discipline can absorb.

I ran Claude Code + Heptabase as daily dogfood for 11 days, distilled the discipline into 4 slash commands and 3 dev specs, and open-sourced the result as cutemo0953/heptabrain. The repo structure is incidental. The six principles underneath are the point.


Why AI knowledge work collapses

The collapse mode is always the same: current state and crystallised knowledge get mixed together.

  • "I am debugging this issue right now and the root cause is X" ← state (expires in two weeks)
  • "Issues of this class always trace back to unclear boundaries" ← knowledge (still true in two years)

When mixed, every file looks like crystallised knowledge but 80% is sprint-bound snapshot. The AI then recommends stale state as if it were knowledge, and your zettelkasten becomes a high-entropy noise pool.

HeptaBrain has exactly one core move: put these two kinds of content into different systems, and force them to communicate only via distillation.

  • Claude Code Memory = current state (days to weeks)
  • Heptabase = crystallised knowledge (permanent)
  • A sync skill in between, which only ever moves the distilled core principle, never bulk content

Simple. The devil is in the six principles below.


Principle 1: Decouple state and knowledge in frontmatter, not by heuristic

Every memory file declares canonicality: state or canonicality: knowledge in YAML frontmatter. The sync skill reads that field — never the filename, never content keywords.

Why it matters:

Heuristics ("if it says 'conclusion' or 'principle', treat as knowledge") fail by week three, because state files naturally use those words too. LLM judgment fails randomly when token budgets get tight.

The only thing that holds up long-term is: the writer marks it once, at creation time.

Cost: one extra line of frontmatter. Return: three months later, your sync does not promote stale sprint summaries into the permanent vault.
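As a concrete illustration of the rule (a hypothetical sketch, not the shipped sync skill; the parser and sample file are mine), a sync step can read the canonicality field from frontmatter and refuse anything undeclared:

```python
# Hypothetical sketch: classify a memory file strictly by its frontmatter
# field, never by filename and never by content keywords.
import re

def canonicality(markdown_text: str) -> str:
    """Return 'state' or 'knowledge' from YAML frontmatter, or raise."""
    m = re.match(r"^---\n(.*?)\n---\n", markdown_text, re.DOTALL)
    if m is None:
        raise ValueError("missing frontmatter: file must declare canonicality")
    for line in m.group(1).splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "canonicality":
            value = value.strip()
            if value in ("state", "knowledge"):
                return value
            raise ValueError(f"invalid canonicality: {value!r}")
    raise ValueError("frontmatter has no canonicality field")

doc = """---
canonicality: state
sprint: 2026-W17
---
Debugging the drift issue right now; root cause looks like X.
"""
print(canonicality(doc))  # → state
```

Undeclared files fail loudly at sync time instead of silently defaulting to one bucket, which is the whole point: the writer marks it once, and the machine never guesses.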


Principle 2: Exactly one canonical authority per entity class — no dual source of truth

This one I paid tuition for.

On 2026-04-05, my bookkeeping system had two simultaneous write paths to "net worth" — a Cloudflare Worker and a Google Apps Script. They were supposed to stay in sync. They drifted. The weekly report came back showing net worth off by NT$7.31M.

Translated to AI knowledge work:

  • "What is the goal of this project" lives in one place.
  • Derivative systems (Heptabase cards, Google Sheets, blog drafts) read only. They never write back.

HeptaBrain's sync skill is strictly single-direction (Memory → Heptabase). However much you edit a Heptabase card, it never overwrites the memory file. If you want the change to stick, edit the memory first, then sync.

Derivative data is not source of truth, however much it looks like it.
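A minimal sketch of the one-way rule, with hypothetical paths and a placeholder distil step (the real skill runs a distillation prompt):

```python
# One-way rule sketch (paths and the distil step are assumptions):
# memory -> vault exists; vault -> memory simply does not.
from pathlib import Path

def distil(text: str) -> str:
    # Placeholder for the distillation step: keep only the core line.
    return text.splitlines()[0] if text.strip() else ""

def sync_to_vault(memory_file: Path, vault_dir: Path) -> Path:
    """Read a memory file, distil it, write a card. No reverse path."""
    vault_dir.mkdir(parents=True, exist_ok=True)
    card_path = vault_dir / (memory_file.stem + ".card.md")
    card_path.write_text(distil(memory_file.read_text(encoding="utf-8")),
                         encoding="utf-8")
    return card_path
```

The guarantee here is structural, not behavioural: there is no function that writes from the vault back into memory, so a drifting card can never overwrite the source of truth.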


Principle 3: Every link carries relation_type, rationale, and evidence — bare wikilinks are rejected

Zettelkasten failure is not "too few links." It is "too many links carrying no information."

A bare [[Card A]] looks like a connection has been made. To future-you (or an AI), it answers none of the questions that matter:

  • Does Card A support this point or refute it?
  • Strong link or just a passing mention?
  • What is the evidence? Which paragraph?

HeptaBrain mandates every AI-suggested link must carry:

  • relation_type: one of supports / refutes / extends / contrasts / instantiates
  • rationale: one or two sentences on why the relation holds
  • evidence: a specific paragraph reference or external citation

Links missing any field are rejected by the sync skill.

Cost: AI generates 50 more words per link. Return: in six months, every edge in your link graph still carries meaning.
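The rejection rule is easy to make mechanical. A sketch, with the field names taken from the post but the validator itself invented for illustration:

```python
# Illustrative validator for the "bare wikilinks are rejected" rule.
# A suggested link is a dict; all three fields must be present, and
# relation_type must come from the closed set named in the post.
RELATION_TYPES = {"supports", "refutes", "extends", "contrasts", "instantiates"}

def validate_link(link: dict) -> list[str]:
    """Return a list of problems; an empty list means the link is accepted."""
    problems = []
    if link.get("relation_type") not in RELATION_TYPES:
        problems.append("relation_type missing or not in the allowed set")
    if not link.get("rationale", "").strip():
        problems.append("rationale missing")
    if not link.get("evidence", "").strip():
        problems.append("evidence missing")
    return problems

bare = {"target": "[[Card A]]"}
full = {
    "target": "[[Card A]]",
    "relation_type": "supports",
    "rationale": "Card A's drift incident is an instance of this failure mode.",
    "evidence": "Card A, paragraph 2 (the 2026-04-05 net-worth drift).",
}
assert len(validate_link(bare)) == 3   # rejected: all three fields absent
assert validate_link(full) == []       # accepted
```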


Principle 4: Cross-domain attractor mapping, not random collision

Conventional semantic search retrieves cards with similar text. But cross-domain insight usually involves different surface text, similar deep structure.

HeptaBrain's Multi-Dimensional Analysis layer forces the AI to map every card to a user-defined "attractor" — concepts like observation as intervention, latency vs accuracy tradeoff, centralisation vs federation.

So when "personal pain tracking," "implantable sensors," and "disaster recovery drills" all map to the observation as intervention attractor, the AI proactively suggests cross-domain links.

A real insight that surfaced this way:

The act of a patient logging their own daily pain score lowers pain intensity. The act of an implantable sensor continuously measuring tissue load alters tissue recovery trajectory. The act of running disaster drills makes systems more resilient. Same pattern.

Fuzzy semantic search would never have found that. Attractor mapping did.
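Assuming the attractor for each card has already been assigned by the analysis layer (the assignments below are hand-written, mirroring the example above), the cross-domain suggestion step itself is just grouping plus pairing:

```python
# Illustrative only: attractor assignments are given, not computed.
# Group cards by attractor, then pair up cards from different domains.
from collections import defaultdict
from itertools import combinations

cards = [
    {"title": "personal pain tracking",   "domain": "clinical", "attractor": "observation as intervention"},
    {"title": "implantable sensors",      "domain": "devices",  "attractor": "observation as intervention"},
    {"title": "disaster recovery drills", "domain": "ops",      "attractor": "observation as intervention"},
    {"title": "edge caching strategy",    "domain": "infra",    "attractor": "latency vs accuracy tradeoff"},
]

by_attractor = defaultdict(list)
for card in cards:
    by_attractor[card["attractor"]].append(card)

suggestions = [
    (a["title"], b["title"], attractor)
    for attractor, group in by_attractor.items()
    for a, b in combinations(group, 2)
    if a["domain"] != b["domain"]  # only cross-domain pairs are interesting
]
assert len(suggestions) == 3  # the three 'observation as intervention' pairs
```

Note that "edge caching strategy" produces no suggestions: its attractor group has only one member. Surface-text similarity never enters the computation at all.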


Principle 5: AI discoveries land in a journal first, not as cards

Letting AI write directly into your permanent knowledge base is the most irresponsible thing you can do to your future attention budget.

The Zettel Walk skill writes all link suggestions into a journal_walks/ folder, accumulating daily. The human, in their own context, promotes the good ones into the actual card vault with a single click.

Why this matters:

In a day, AI can propose 30 link candidates. Maybe 2 are real insights. The other 28 are text-similar noise. If all 30 land in your card vault, six months later your network is buried.

AI is a candidate generator. The human is the ranker.
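A sketch of the journal-first flow; the directory layout follows the journal_walks/ folder described above, but the promote() helper and the file names are my assumptions:

```python
# Journal-first sketch: AI-side appends land in a dated journal file;
# only an explicit human call moves one suggestion into the card vault.
import datetime
from pathlib import Path

def log_suggestion(root: Path, suggestion: str) -> Path:
    """AI-side: append a candidate to today's journal. Never touches cards/."""
    journal = root / "journal_walks" / f"{datetime.date.today()}.md"
    journal.parent.mkdir(parents=True, exist_ok=True)
    with journal.open("a", encoding="utf-8") as f:
        f.write(f"- [ ] {suggestion}\n")  # unchecked = not yet promoted
    return journal

def promote(root: Path, suggestion: str) -> Path:
    """Human-side: copy one vetted suggestion into the card vault."""
    vault = root / "cards"
    vault.mkdir(parents=True, exist_ok=True)
    card = vault / "promoted.md"
    with card.open("a", encoding="utf-8") as f:
        f.write(suggestion + "\n")
    return card
```

The 28 noise candidates stay in the journal forever, cheaply greppable but never polluting the vault; only what a human promotes crosses the boundary.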


Principle 6 (added in week 2): Code governance ≠ Institutional governance

After the LinkedIn launch I did another week of high-density application. Something happened in week 2 that forced this principle in.

I built an in-hospital QR poster for a clinical workflow rollout. All code-level reviews passed: PHI isolation, IDOR defence, institution-bounded queries, consent flow, auth, schema migration. /codex:review ran. Gemini + ChatGPT inline-diff review ran in parallel. Three reviewers found nothing.

On day one of the rollout, the head nurse stopped it cold. Hospital policy: any QR poster requires a PR-department stamp. The orthopaedic chief assumed it was a Facebook fan-page promotion, not an academic project.

Root cause: nobody, before code ship, asked "what formal/informal governance mechanisms get triggered when this feature physically lands inside the hospital?"

After that, HeptaBrain added a new lens: institutional governance review.

  • Code governance covers PHI, IDOR, auth, schema, and contract compatibility. Tools: Codex + Multi-AI (Gemini + ChatGPT).
  • Institutional governance covers approval chains, physical-object policy, informal gatekeepers, policy precedent, and vocabulary mismatch. Tool: the institutional-governance-review lens.

The two layers do not substitute for each other. Running only the first = paying half the tuition.

This generalises beyond healthcare — financial KYC rollout, government G2C systems, enterprise B2B products all have their own institutional governance layer. In high-governance domains, the first obstacle to AI deployment is usually not code, it is people and norms.


Three sub-disciplines (the tuition lines)

While drafting the six principles above, I accumulated three finer review disciplines. They are how to execute the principles, not principles themselves:

  • Self-review per rev × per file: a spec revised three times needs three self-reviews. Previous external review does not cover this rev's new fixes. After tripping over this three times in one session, I hard-coded it into hooks.
  • Multi-AI parallel review: Codex catches implementation bugs. Gemini + ChatGPT catch contract violations (cross-tenant, IDOR, timing attacks, write-compat breakage). The two reviewer types catch entirely different bug classes.
  • Field rename ≠ additive: when a schema changes, the self-review must check both "new field added" and "old field removed." Otherwise old PWA caches hit a hard 400.

These live in memory/feedback_*.md, one of the templates shipped in examples/memory/.


Architecture: three-layer stack

If you only read the HeptaBrain README you might think this is a sync tool. It is the lowest layer of a three-layer stack:

  • L3, Multi-Dimensional Analysis: cross-domain attractor mapping and cross-domain link suggestions
  • L2, Strategic Review System: multiple lenses on the same artifact, each from a different angle
  • L1, Cyberbrain (HeptaBrain Sync): state/knowledge decoupling, distillation, journal-first promotion

L1 is what is open-source today. L2 + L3 will follow once 10+ reviews accumulate to validate the methodology.


License + invitation

  • Specs / docs: CC BY 4.0
  • Install scripts / code: MIT
  • GitHub issue templates: port report (porting to other tools), spec critique (where you disagree with a principle), bug

If you run AI knowledge work too, the most useful feedback you can send:

  1. Port report — you ported it to a different tool (not Heptabase), tell me which principles transferred and which were HB-specific
  2. Spec critique — which principle you disagree with, and why
  3. New principle — after running it, you discovered a 7th. Send a PR.

Repo: github.com/cutemo0953/heptabrain


Closing

The tool is the surface. The discipline is the substance.

This setup survives two weeks of daily dogfood not because Heptabase is magical, not because Claude Code is powerful — but because six principles outsource the structural problem ("AI writes faster than I can curate") to the rules themselves.

If you do AI-assisted knowledge work — clinical research, product strategy, regulatory argumentation — the same bottleneck waits for you. Synthesis scales with discipline. Copy-paste does not.

A question for you: which discipline does your AI-assisted knowledge work need but does not yet have?


This is the long-form companion to the LinkedIn launch post (TBD). The cutemo0953/heptabrain repo went public on 2026-04-21; this post adds the week-2 discovery (Principle 6).