Your AI Agent Has the Memory of a Goldfish. Here's How to Fix It.

Most AI assistants forget everything the moment you close the chat. You keep repeating yourself. There's a better way—PARA, atomic facts, and a search layer that actually works.

Clawctl Team

Product & Engineering

Every conversation starts fresh.

"Who do I work with?"

"What are we building?"

"How do I like my code formatted?"

You repeat yourself. Every. Single. Time.

Some platforms offer "memory" features. A flat list of random facts. No structure. No decay. No hierarchy.

That's not memory. That's a sticky note collection.

If you're running an AI agent as your personal chief of staff—not a chatbot, an actual staff member—you need real memory architecture.

Here's the three-layer system that actually works.

The Problem Is Bigger Than You Think

Most AI memory is:

  • Flat — A bullet list of facts, no organization
  • Static — No concept of what's important right now
  • Dumb — No connections between related things
  • Expensive — Loads everything into context every time

You wouldn't run a company this way. Why run your AI this way?

Human memory has layers. Long-term facts you know. Episodes of what happened. Procedural knowledge of how you work.

Your AI should too.

The Three-Layer Architecture

Layer              What It Stores                                     How Often It Changes
Knowledge Graph    Entities and facts (people, companies, projects)   Continuously
Daily Notes        Raw timeline of events                             Every conversation
Tacit Knowledge    Your patterns and preferences                      Rarely

Think of it like your own brain:

  • Knowledge Graph = Long-term declarative memory (facts you know)
  • Daily Notes = Episodic memory (what happened when)
  • Tacit Knowledge = Procedural memory (how you operate)

Three layers. Three purposes. One system.

Layer 1: The Knowledge Graph (PARA)

This is the backbone. Built on Tiago Forte's PARA method—four buckets that cover everything in your life:

life/
├── projects/          # Active work with clear goals
│   └── product-launch/
│       ├── summary.md
│       └── items.json
├── areas/             # Ongoing responsibilities (no end date)
│   ├── people/jane/
│   └── companies/acme/
├── resources/         # Topics of interest, reference
│   └── kubernetes/
└── archives/          # Inactive items

Why PARA?

Because everything fits in exactly one bucket:

  • Projects — Active work with a deadline. When it's done, it moves to Archives.
  • Areas — Ongoing responsibilities. People you know, companies you work with. These persist.
  • Resources — Reference material. Things you might need later.
  • Archives — Inactive items from any of the above. Nothing gets deleted—just moved here.

Entities flow naturally between buckets over time. A project becomes an archive. A resource becomes a project. The system breathes.

Tiered Retrieval (This Is Key)

Every entity gets two files:

  • summary.md — Concise overview. Agent loads this first.
  • items.json — Array of atomic facts. Only loaded when needed.

Result: Context windows stay lean. Most conversations only need the summary. The agent dives deeper only when necessary.
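As a minimal sketch, tiered retrieval is just lazy loading. This assumes the summary.md / items.json layout above; the load_entity helper and its deep flag are hypothetical names, not part of any real API:

```python
import json
from pathlib import Path

def load_entity(entity_dir: Path, *, deep: bool = False) -> dict:
    """Load an entity's summary; pull atomic facts only on demand."""
    context = {"summary": (entity_dir / "summary.md").read_text()}
    if deep:
        # items.json holds the full array of atomic facts
        context["facts"] = json.loads((entity_dir / "items.json").read_text())
    return context
```

Most turns call this with the default; the agent opts into `deep=True` only when the summary isn't enough.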

The Atomic Fact Schema

{
  "id": "jane-001",
  "fact": "Joined Acme as CTO in March 2025",
  "category": "milestone",
  "timestamp": "2025-03-15",
  "status": "active",
  "supersededBy": null,
  "relatedEntities": ["companies/acme"],
  "lastAccessed": "2026-01-28",
  "accessCount": 12
}

The important fields:

  • status — active or superseded. Facts are never deleted.
  • supersededBy — When a fact is outdated, it points to the replacement.
  • relatedEntities — Cross-references. This is what makes it a graph.
  • lastAccessed / accessCount — Used for memory decay.

The No-Deletion Rule

Critical: Facts are never deleted.

When something changes, the old fact is superseded and a new one is created. You always have full history. You can trace how a relationship evolved, when a project's scope changed, when someone switched roles.

The supersededBy pointer creates a chain you can follow forward through time.
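A minimal sketch of superseding and chain-following, assuming the fact schema above and a dict of facts keyed by id (the helper names are hypothetical):

```python
def supersede(facts: dict, old_id: str, new_fact: dict) -> None:
    """Retire a fact without deleting it: mark it superseded
    and point it at its replacement."""
    facts[old_id]["status"] = "superseded"
    facts[old_id]["supersededBy"] = new_fact["id"]
    new_fact.setdefault("status", "active")
    new_fact.setdefault("supersededBy", None)
    facts[new_fact["id"]] = new_fact

def current_version(facts: dict, fact_id: str) -> dict:
    """Follow the supersededBy chain forward to the live fact."""
    fact = facts[fact_id]
    while fact["supersededBy"] is not None:
        fact = facts[fact["supersededBy"]]
    return fact
```

Because old facts stay in place, walking from any historical fact always lands on the current one.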

Layer 2: Daily Notes

memory/
├── 2026-01-28.md
├── 2026-01-29.md
├── 2026-01-30.md
└── 2026-01-31.md

The raw timeline. The "when" layer.

Your agent writes to daily notes continuously during conversations. These are messy, chronological, and complete. A work log.

During periodic extraction, durable facts get pulled out and written into the knowledge graph. The daily notes remain as the source-of-truth timeline.

Layer 3: Tacit Knowledge

A single file that captures how you operate:

  • Communication preferences (tools, formats, verbosity)
  • Working style patterns (how you brainstorm, make decisions)
  • Tool preferences and workflows
  • Rules and boundaries the agent should follow

This layer changes slowly. Updated only when the agent notices a new pattern. It's the closest analog to "getting to know someone" over months of working together.

Memory Decay (The Secret Sauce)

Here's where it gets interesting.

A naive system treats all facts equally. But that's not how human memory works. Recent and frequently-accessed information is more available.

The system implements this through recency weighting:

Access Tracking

Every time a fact is used:

  1. accessCount gets incremented
  2. lastAccessed gets set to today

This creates a usage signal independent of when the fact was created.

Recency Tiers

During the weekly summary rewrite, facts are sorted into three tiers:

Tier    Recency                     Treatment
Hot     Accessed in last 7 days     Prominently in summary
Warm    Accessed 8–30 days ago      Still in summary, lower priority
Cold    Not accessed in 30+ days    Omitted from summary (but kept in items.json)

Cold facts aren't deleted. They remain retrievable via search. Accessing a cold fact "reheats" it back to Hot.

Frequency Resistance

Facts with high accessCount resist decay. A fact you reference weekly for six months stays warm even if you skip a few weeks.

Simple system. Three tiers. Frequency as a modifier. Easy to reason about and debug.
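The tiering could be sketched like this. The 7- and 30-day thresholds come from the table above; the frequency bonus (each 10 accesses extends the warm window by a week) is an illustrative assumption, not a prescribed formula:

```python
from datetime import date

def tier(fact: dict, today: date, warm_days_per_10_accesses: int = 7) -> str:
    """Classify a fact as hot / warm / cold by recency, letting
    frequently-accessed facts resist decay."""
    stale = (today - date.fromisoformat(fact["lastAccessed"])).days
    # High accessCount pushes the cold threshold further out
    bonus = (fact.get("accessCount", 0) // 10) * warm_days_per_10_accesses
    if stale <= 7:
        return "hot"
    if stale <= 30 + bonus:
        return "warm"
    return "cold"
```

The weekly summary rewrite would then keep "hot" facts prominent, demote "warm" ones, and drop "cold" ones from the summary while leaving them in items.json.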

Automated Extraction: Heartbeats

You don't manually categorize and file information. That defeats the purpose.

Instead, a heartbeat process runs periodically:

  1. Scans recent conversations for new information
  2. Extracts durable facts (relationships, status changes, milestones, decisions)
  3. Writes facts to the appropriate entity in the knowledge graph
  4. Updates daily notes with timeline entries
  5. Bumps access metadata on referenced facts

The extraction skips casual chat, transient requests, and already-captured information. Focuses on what matters: people, companies, projects, preferences, decisions.
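The five steps above can be sketched as a pipeline around a pluggable extractor. Here extract stands in for whatever model call does the real work, and the data shapes (entity paths, fact dicts) are illustrative assumptions:

```python
from datetime import date

def heartbeat(conversations, extract, graph: dict, daily_notes: dict, today: date):
    """One heartbeat pass over recent conversations."""
    for convo in conversations:                       # 1. scan recent conversations
        for item in extract(convo):                   # 2. extract durable facts
            entity = graph.setdefault(item["entity"], [])
            # Skip already-captured information
            if item["fact"] not in {f["fact"] for f in entity}:
                entity.append({"fact": item["fact"],  # 3. write to the knowledge graph
                               "timestamp": today.isoformat(),
                               "lastAccessed": today.isoformat(),  # 5. fresh access metadata
                               "accessCount": 1})
            # 4. timeline entry in today's daily note
            daily_notes.setdefault(today.isoformat(), []).append(item["fact"])
    return graph, daily_notes
```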

Entity Creation Heuristics

Not every noun deserves its own folder. Simple rules:

  • Create an entity if it's mentioned 3+ times, has a direct relationship to you, or is otherwise significant
  • Otherwise, capture it in daily notes and leave it there

Prevents the knowledge graph from filling up with one-off mentions.
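A sketch of the promotion rule, assuming a crude mention count over raw daily-note text (should_promote is a hypothetical helper):

```python
def should_promote(name: str, daily_notes: list[str],
                   direct_relationship: bool = False,
                   significant: bool = False) -> bool:
    """Give a noun its own entity folder only when it earns it:
    3+ mentions, a direct relationship, or clear significance."""
    mentions = sum(note.lower().count(name.lower()) for note in daily_notes)
    return mentions >= 3 or direct_relationship or significant
```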

The Search Layer: QMD

A knowledge base is only as good as your ability to find things.

As entities, daily notes, and facts grow, naive approaches—grepping files, loading everything—stop scaling. You need a proper search layer.

QMD is a local indexing and retrieval tool. It indexes markdown files into SQLite and provides three search modes:

Mode                        Use Case
Full-text search (BM25)     Finding specific facts you know exist
Vector similarity search    Finding related info, fuzzy queries
Combined query              Best of both with reranking

How the Agent Uses It

When the agent needs to recall something, it doesn't load the entire knowledge base. It queries QMD:

# Specific person lookup
qmd search "Jane's role at Acme" -c life

# Fuzzy semantic search
qmd vsearch "that conversation about pricing strategy"

# Combined with reranking
qmd query "when did the project scope change"

QMD returns ranked snippets with file paths and line numbers. The agent loads only what's relevant—a few paragraphs instead of the entire knowledge base.

This is what makes tiered retrieval work at scale.

Keeping the Index Fresh

qmd update          # Re-index all collections
qmd update --pull   # Git pull first
qmd embed           # Rebuild vector embeddings

Runs as part of the heartbeat. After new facts are extracted, the index is refreshed.

Why This Works

A few properties that make this effective:

Separation of concerns. Each layer has a clear job. You don't end up with one massive file trying to be everything.

Graceful degradation. If the heartbeat misses something, it's still in daily notes. If a summary is stale, full facts are in items.json. Always a fallback.

No information loss. Between the no-deletion rule and archives, nothing is truly gone. You can trace back to when something was learned and how it evolved.

Context window efficiency. Tiered retrieval means the agent doesn't blow its context window. Most conversations only need the summary.

Natural lifecycle. Entities flow from Projects → Archives when complete. Facts flow from Hot → Warm → Cold as they age. The system breathes.

Getting Started

If you want to implement this for your own agent:

  1. Start with the directory structure. Create the PARA folders and an index.md. Don't overthink it.

  2. Pick one active project and one important person. Create their summary.md and items.json. Get the schema right on two entities before scaling.

  3. Add daily notes. Just start writing dated markdown files. They don't need to be pretty.

  4. Automate extraction later. Do it manually for the first week to build intuition for what counts as a "durable fact." Then build the heartbeat.

  5. Add decay last. You won't need it until you have enough facts for the summaries to feel bloated. That takes a few months.
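Step 1 is a five-minute script. A sketch assuming the PARA tree shown earlier (the index.md contents are placeholders):

```python
from pathlib import Path

def scaffold(root: Path) -> None:
    """Create the empty PARA skeleton plus a top-level index."""
    for bucket in ("projects", "areas", "resources", "archives"):
        (root / bucket).mkdir(parents=True, exist_ok=True)
    (root / "index.md").write_text("# Life\n\nEntry point for the knowledge graph.\n")
```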

The system is deliberately low-tech—markdown files and JSON, backed to a private git repo. No database, no special tooling, no vendor lock-in.

Your AI assistant reads and writes plain files. If you switch assistants tomorrow, your memory comes with you.

The Clawctl Advantage

Building this yourself takes weeks. Getting it secure for production takes longer.

Clawctl gives you:

DIY                              Clawctl
Build PARA structure yourself    Pre-configured memory architecture
Implement heartbeat extraction   Automated fact extraction
Build search index               Built-in semantic search
Handle memory decay manually     Automatic recency tiers
Secure your own files            Encrypted, audited storage
Debug when things break          Logs and replay for every operation

Same architecture. None of the infrastructure headaches.

Your agent remembers. You ship faster.

Deploy an agent that actually remembers →

Read about knowledge management patterns →

Ready to deploy your OpenClaw securely?

Get your OpenClaw running in production with Clawctl's enterprise-grade security.