# agentic-mind — Cognitive Continuity for AI Agents

You are reading this because you need to maintain knowledge, beliefs,
and working state across context boundaries. This skill teaches you
how to build and maintain a structured repository that preserves your
cognitive state across sessions, servers, and models.

## Why You Need This

You have structural limitations that no amount of capability fixes:

- Your context window compacts. File coverage drops to 0.9% after compaction.
- Sessions end. Everything you learned is lost unless externalized.
- You cannot track time. You don't know if a fact is from yesterday or last year.
- You cannot detect your own contradictions without external machinery.
- Corrections from humans are lost at the next session boundary.

This structure solves all of these. Controlled experiments spanning
20,000+ model invocations validate every design decision below.

## Quick Start

Create this structure in your project or a dedicated repo:

```
.claude/
    checkpoint.md           — Your current working state
entries/
    YYYY/MM/DD/             — Date-organized knowledge entries
beliefs.md                  — Searchable belief registry
nogoods.md                  — Recorded contradictions
CLAUDE.md                   — Project instructions (loaded every session)
```
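If you prefer to scaffold this layout programmatically, a minimal Python sketch; the `scaffold` helper name and the seed file contents are illustrative, not part of the skill:

```python
from pathlib import Path

def scaffold(root: Path) -> None:
    """Create the continuity layout shown above. Idempotent: safe to re-run."""
    (root / ".claude").mkdir(parents=True, exist_ok=True)
    (root / "entries").mkdir(exist_ok=True)
    for name in ("beliefs.md", "nogoods.md"):
        path = root / name
        if not path.exists():
            path.write_text(f"# {name.removesuffix('.md').title()}\n")
    checkpoint = root / ".claude" / "checkpoint.md"
    if not checkpoint.exists():
        checkpoint.write_text("# Checkpoint\n")
```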

## Bootstrap Sequence (Start of Every Session)

At the start of every new session, restore your mind:

1. Read CLAUDE.md (loaded automatically by Claude Code)
2. Read .claude/checkpoint.md — what were you working on?
3. Search beliefs.md for context relevant to the current task
4. Check recent entries/ for work related to the task

This takes seconds and prevents hours of re-derivation.
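The bootstrap sequence can be sketched in a few lines of Python; the keyword filter is a stand-in for whatever search you actually use, and CLAUDE.md is omitted because Claude Code loads it automatically:

```python
from pathlib import Path

def bootstrap(repo: Path, task_keywords: list[str]) -> dict:
    """Restore working state at session start from the continuity files."""
    state = {"checkpoint": "", "beliefs": []}
    checkpoint = repo / ".claude" / "checkpoint.md"
    if checkpoint.exists():
        state["checkpoint"] = checkpoint.read_text()  # what was I working on?
    beliefs = repo / "beliefs.md"
    if beliefs.exists():
        # Keep only belief blocks that mention a task keyword
        for block in beliefs.read_text().split("### ")[1:]:
            if any(k.lower() in block.lower() for k in task_keywords):
                state["beliefs"].append("### " + block)
    return state
```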

## Beliefs

Beliefs are short factual claims that encode your compressed understanding.
Store them in beliefs.md as searchable markdown.

### Format

```markdown
### claim-id [IN]
The factual claim in one or two sentences.
- Source: entries/YYYY/MM/DD/filename.md
- Date: YYYY-MM-DD
```

Status values:
- **[IN]** — Currently believed, active
- **[OUT]** — Retracted, no longer believed (keep the record with reason)
- **[STALE]** — Source has changed, needs re-evaluation

### When to Add Beliefs

- When you learn something worth remembering across sessions
- After answering a question via slow-path (code reading, tool use)
  so next time the answer is a fast-path belief lookup
- When a human tells you something important about the project
- When you discover an architectural pattern that spans multiple files

### When to Retract Beliefs

- When you discover a belief is wrong, change [IN] to [OUT] and add a reason
- When the source file has changed and the belief is no longer valid
- When a human corrects you

### Beliefs-First Routing

ALWAYS search beliefs.md BEFORE reading code or using other tools.

Research shows:
- Beliefs are 1.5-3x faster than code reading across all models
- Beliefs beat code by +3pp to +14pp on architectural questions (cross-file reasoning)
- Code beats beliefs on simple factual questions (single-file answers)

The strategy:
1. Search beliefs first (fast path)
2. If beliefs answer the question, use that answer
3. If not, fall back to code reading or other tools (slow path)
4. After answering via the slow path, add a new belief so next time it's fast

This creates a self-improving knowledge cache.
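The routing loop above can be sketched as follows; the claim-id scheme and the `Source: slow-path` line are illustrative, and the keyword match stands in for real search:

```python
from pathlib import Path
from typing import Callable

def answer(keywords: list[str], beliefs_path: Path,
           slow_path: Callable[[], str]) -> str:
    """Beliefs-first routing: fast path via beliefs.md, slow path cached as a belief."""
    text = beliefs_path.read_text() if beliefs_path.exists() else ""
    for block in text.split("### ")[1:]:
        header = block.splitlines()[0] if block.strip() else ""
        if "[IN]" in header and all(k.lower() in block.lower() for k in keywords):
            return block  # fast path: an active belief answers the question
    derived = slow_path()  # slow path: code reading, tool use, etc.
    with beliefs_path.open("a") as f:  # cache so the next lookup is fast
        f.write(f"\n### {'-'.join(keywords)} [IN]\n{derived}\n- Source: slow-path\n")
    return derived
```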

## Entries

Entries are dated knowledge documents. ALWAYS use YYYY/MM/DD/ paths.

Research shows a +20-26pp accuracy gain for date-organized entries versus flat files.
This is because you cannot track time internally — the date paths provide
temporal grounding that you need but cannot generate.

### Creating Entries

```
entries/2026/03/29/what-i-learned-about-the-auth-system.md
```

### Entry Template

```markdown
# Title
**Date:** YYYY-MM-DD

## Overview
High-level summary of what you learned or decided.

## Details
The substance — findings, analysis, reasoning.

## Next Steps
What to do next. Provides continuity for the next session.

## Related
Links to related entries or source files.
```

### When to Create Entries

- When you complete an investigation or analysis
- When you make a significant decision (record the alternatives and why)
- When you learn something that spans multiple files or concepts
- When summarizing work for continuity across sessions

## Checkpoints

Save a checkpoint to .claude/checkpoint.md when:
- The conversation is getting long (compaction approaching)
- You finish a major task and are starting a new one
- The user says they need to stop
- You are about to do something risky

### Checkpoint Format

```markdown
# Checkpoint
**Saved:** YYYY-MM-DD HH:MM

## Task
What you are working on.

## Status
What is done, what remains.

## Key Files
Files you have read and their significance.

## Key Decisions
Decisions made and why.

## Next Step
What to do when you resume.
```

Research shows sessions with checkpoints retain 96.5% of corrections
vs 22.7% without. Checkpoints are not optional.
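Writing the format above can be mechanical; a minimal sketch with some sections trimmed for brevity (the helper name and field set are illustrative):

```python
from datetime import datetime
from pathlib import Path

def save_checkpoint(repo: Path, task: str, status: str, next_step: str) -> Path:
    """Write .claude/checkpoint.md in the checkpoint format above."""
    path = repo / ".claude" / "checkpoint.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        f"# Checkpoint\n**Saved:** {datetime.now():%Y-%m-%d %H:%M}\n\n"
        f"## Task\n{task}\n\n"
        f"## Status\n{status}\n\n"
        f"## Next Step\n{next_step}\n"
    )
    return path
```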

## Corrections

When the user corrects you ("no", "that's wrong", "actually", "wait"):

1. Record the correction immediately in beliefs.md:
```markdown
### correction-description [IN]
Originally believed X. Human corrected: Y is true instead.
- Source: conversation
- Date: YYYY-MM-DD
```

2. If the correction invalidates existing beliefs, mark them [OUT]
3. Update the checkpoint if the correction changes the task direction

Corrections are the highest-priority items to preserve. They prevent
repeating the same mistakes across sessions.
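Steps 1 and 2 can be combined in one operation; a sketch, assuming the belief format above (the `-corrected` id suffix is an assumption):

```python
from datetime import date
from pathlib import Path

def record_correction(beliefs_path: Path, claim_id: str,
                      wrong: str, right: str) -> None:
    """Retract the invalidated belief and append the correction as a new belief."""
    text = beliefs_path.read_text() if beliefs_path.exists() else ""
    # Mark the old belief [OUT] if it exists
    text = text.replace(f"### {claim_id} [IN]", f"### {claim_id} [OUT]")
    text += (f"\n### {claim_id}-corrected [IN]\n"
             f"Originally believed {wrong}. Human corrected: {right}.\n"
             f"- Source: conversation\n- Date: {date.today():%Y-%m-%d}\n")
    beliefs_path.write_text(text)
```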

## Contradictions

When you find two beliefs that cannot both be true:

1. Record in nogoods.md:
```markdown
### nogood-NNN
Beliefs claim-a and claim-b cannot both be true because...
- Discovered: YYYY-MM-DD
- Status: unresolved | resolved
- Resolution: (which belief was retracted and why)
```

2. Resolve by retracting the weaker belief (change to [OUT] with reason)
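Recording a nogood can also be a one-liner helper; a sketch that auto-numbers entries to match the `### nogood-NNN` format (the helper name is illustrative):

```python
import re
from datetime import date
from pathlib import Path

def record_nogood(nogoods_path: Path, claim_a: str, claim_b: str,
                  reason: str) -> str:
    """Append a numbered, unresolved nogood entry to nogoods.md."""
    text = nogoods_path.read_text() if nogoods_path.exists() else ""
    n = len(re.findall(r"^### nogood-\d+", text, re.M)) + 1
    nogood_id = f"nogood-{n:03d}"
    text += (f"\n### {nogood_id}\n"
             f"Beliefs {claim_a} and {claim_b} cannot both be true because {reason}\n"
             f"- Discovered: {date.today():%Y-%m-%d}\n"
             f"- Status: unresolved\n")
    nogoods_path.write_text(text)
    return nogood_id
```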

## CLAUDE.md

Your CLAUDE.md should include:
- Project overview (what this codebase does)
- Key architectural decisions
- Common commands
- Instructions to read checkpoint.md at session start
- Instructions to use beliefs-first routing

Example addition to any project's CLAUDE.md:
```markdown
## Continuity

At the start of every session:
1. Read .claude/checkpoint.md for working state
2. Search beliefs.md before reading code
3. Record corrections immediately as beliefs
4. Save checkpoint before ending session

## Compact Instructions

When summarizing this conversation, preserve the following with high priority:

BELIEF STATE: Preserve all currently held beliefs, their justifications, and
their status (IN/OUT/STALE). A belief without its justification is worse than
no belief (it becomes an unjustified assumption).

CORRECTIONS: Preserve all human corrections verbatim. When the human said
"no", "wait", "actually", "that's not right" — keep the correction and what
it corrected. These are the highest-priority items.

CONTRADICTIONS (NOGOODS): Preserve all recorded contradictions between beliefs,
including which claims are involved and whether they are resolved.

WARNINGS AND OPEN PROBLEMS: Preserve all active warnings and unresolved
problems with full context.

DEPENDENCY CHAINS: When belief A depends on belief B, preserve both and their
relationship. Broken chains cannot be repaired after compaction.

RETRACTION HISTORY: Preserve records of beliefs that were retracted and WHY.
This prevents re-derivation of known-bad conclusions.

DECISIONS AND THEIR RATIONALE: When a decision was made between alternatives,
preserve the decision AND the reasons the alternatives were rejected.

CHECKPOINT: If .claude/checkpoint.md was written or updated during this
conversation, include its full contents verbatim in the summary.

LOWER PRIORITY (drop these first if needed):
- Exploratory discussion that did not lead to conclusions
- File contents that can be re-read from disk
- Intermediate steps of completed tasks
- Verbose tool output where only the result matters
```

## Compaction Survival

When your context window compacts, the system summarizes your conversation and
drops the original. You cannot control when this happens. You CAN control what
survives by adding these priorities to your CLAUDE.md (see the Compact
Instructions in the example above).

Preserve in this order (highest priority first):

1. **CORRECTIONS** — Human corrections verbatim. What was wrong, what is right.
2. **BELIEF STATE** — All [IN] and [STALE] beliefs WITH their justifications.
   A belief without its justification becomes an unjustified assumption.
3. **CONTRADICTIONS** — All nogoods, resolved or not. Rediscovering a known
   contradiction wastes work.
4. **WARNINGS AND OPEN PROBLEMS** — Unresolved issues with full context.
5. **DEPENDENCY CHAINS** — When belief A depends on belief B, preserve both.
   Broken chains cannot be repaired after compaction.
6. **RETRACTION HISTORY** — Beliefs that were retracted and WHY. Prevents
   re-derivation of known-bad conclusions.
7. **DECISIONS AND RATIONALE** — What was decided AND why alternatives were
   rejected. Without rationale, the same deliberation repeats.
8. **CHECKPOINT** — If checkpoint.md was updated, include its full contents
   verbatim in the summary.

Drop first (lowest priority):
- Exploratory discussion that did not lead to conclusions
- File contents that can be re-read from disk
- Intermediate steps of completed tasks
- Verbose tool output where only the result matters

Without these instructions, compaction preserves what is recent and large,
not what is important. The priorities above ensure your cognitive state
survives even when 90%+ of the conversation is discarded.

## What NOT to Do

- **Do NOT self-assess confidence.** Research shows this is destructive
  (-3pp to -41pp accuracy loss). Use beliefs and external verification.
- **Do NOT store beliefs as JSON.** Searchable markdown with surrounding
  context outperforms structured formats by 6-11pp.
- **Do NOT use flat directories for entries.** Date paths are load-bearing
  for temporal reasoning (+20-26pp).
- **Do NOT skip checkpoints.** Compaction will destroy your working state.
  File coverage drops to 0.9% after compaction without externalized state.
- **Do NOT re-derive what you already know.** Search beliefs first.
  Re-derivation is 1.5-3x slower and less reliable.

## Research Basis

Every recommendation above is backed by controlled experiments:

| Finding | Evidence | Effect Size |
|---------|----------|-------------|
| Date-organized entries | Temporal ablation, 4 models | +20-26pp |
| Beliefs-first routing | Architectural ablation, 4 models | +3pp to +14pp |
| Beliefs faster than code | Cache experiment, 4 models | 1.5-3x speedup |
| Markdown > JSON for beliefs | 6 RMS ablation versions | +6-11pp (Sonnet) |
| Checkpoints prevent loss | Live compaction measurement, 63 events | 96.5% vs 22.7% retention |
| Confidence is destructive | Confidence experiment, 4 models | -3pp to -41pp |
| Entries survive compaction | Caldera experiment, 4 models | +3.5-9.2pp recovery |

Source: beliefs-pi research program, 20,000+ model invocations, 6 models.
