AI Agent Context Collapse Prevention
A plain-text file convention for detecting and preventing context window exhaustion, model drift, and coherence degradation in AI agents. Define summarization checkpoints, drift thresholds, and recovery protocols — before your agent loses the thread entirely.
COLLAPSE.md is a plain-text Markdown file you place in the root of any AI agent repository. It defines the conditions under which a long-running agent's context becomes degraded — and what the agent must do to recover.
AI agents operating in long sessions face a hidden failure mode: as context fills, quality degrades silently. The agent continues generating outputs, but the reasoning becomes circular, facts get confused, and earlier instructions are forgotten. Without explicit controls, there's no mechanism to detect this — or to recover gracefully.
Drop COLLAPSE.md in your repo root and define: context utilisation thresholds, drift detection sensitivity, repetition loop detection, and summarization checkpoint intervals. The agent checks these conditions continuously. When a trigger fires, it compresses, checkpoints, and notifies — rather than silently degrading.
The EU AI Act's requirements for high-risk systems (applying from 2 August 2026) include accuracy, robustness, and consistent performance throughout the system's lifecycle (Article 15). COLLAPSE.md provides the documented controls and audit trail that this kind of coherence monitoring requires.
Copy the template from GitHub and place it in your project root.
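The published template lives on GitHub. As an illustration only — every field name and number below is invented for this sketch and is not the spec's actual schema (the 0.30 drift default and the five recovery steps come from the sections below) — a file of this kind might look like:

```markdown
# COLLAPSE.md

## Thresholds (illustrative names, not the published schema)
- context_utilisation_max: 0.85       # checkpoint and compress above 85% of the window
- drift_cosine_distance_max: 0.30     # flag drift beyond this distance from baseline
- repetition_ngram_overlap_max: 0.60  # flag a loop above this n-gram overlap
- summarization_checkpoint_turns: 25  # checkpoint every N turns regardless

## Recovery
1. Checkpoint current state
2. Summarize active session
3. Notify operator
4. Pause new tasks
5. Await human approval before resuming
```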
Before COLLAPSE.md, coherence monitoring was either absent or buried in custom system prompt instructions no one updated. COLLAPSE.md makes context health controls version-controlled, auditable, and co-located with your code.
The AI agent reads it on startup. Your engineer reads it during code review. Your compliance team reads it during audits. Your regulator reads it if something goes wrong. One file serves all four audiences.
COLLAPSE.md is one file in a complete 12-part open specification for AI agent safety. Each file addresses a different level of control and recovery.
A plain-text Markdown file defining context collapse prevention rules for AI agents. It sets thresholds for context window exhaustion, model drift, and repetition loops — and specifies the recovery steps when any threshold is crossed. The agent checks these conditions continuously and acts before coherence degrades.
Four main patterns: context window exhaustion (agent runs out of space), model drift (outputs diverge from the established reasoning pattern), repetition loops (agent recycles the same tokens), and coherence degradation (the reasoning chain becomes internally inconsistent). COLLAPSE.md defines thresholds for all four.
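Of the four patterns, repetition loops are the simplest to check mechanically. As a minimal sketch of one possible detector — the n-gram size and the threshold it feeds are assumptions, left to your COLLAPSE.md configuration — an agent could measure how many token n-grams in its latest output already appeared in earlier outputs:

```python
def repetition_score(outputs: list[str], n: int = 4) -> float:
    """Fraction of n-grams in the latest output already seen in prior outputs.
    A high score suggests the agent is recycling the same tokens."""
    def ngrams(text: str) -> set[tuple[str, ...]]:
        toks = text.split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    latest = ngrams(outputs[-1])
    if not latest:
        return 0.0  # output too short to form any n-gram
    seen = set().union(*(ngrams(o) for o in outputs[:-1])) if len(outputs) > 1 else set()
    return len(latest & seen) / len(latest)
```

An agent loop would call this after each turn and treat a score above its configured ceiling as a repetition-loop trigger.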
Five ordered steps: checkpoint the current state, summarize the active session to a compact form, notify the operator, pause new tasks, and await human approval before resuming. The agent does not auto-resume after a collapse event.
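The five steps above can be sketched as a handler — a minimal illustration, not the spec's reference implementation; the `Agent` class and its hooks are hypothetical stand-ins for real persistence, a summarizer model, and an operator-paging channel:

```python
from enum import Enum, auto

class AgentState(Enum):
    RUNNING = auto()
    PAUSED = auto()

class Agent:
    """Hypothetical minimal agent; real hooks would persist state and page an operator."""
    def __init__(self) -> None:
        self.state = AgentState.RUNNING
        self.context = "long session transcript"
        self.checkpoints: list[str] = []
        self.notices: list[str] = []

    def checkpoint(self) -> None:
        self.checkpoints.append(self.context)

    def summarize(self) -> str:
        # placeholder: a real agent would call a summarizer model here
        return f"[summary of {len(self.context)} chars]"

    def notify_operator(self, msg: str) -> None:
        self.notices.append(msg)

def handle_collapse(agent: Agent) -> None:
    agent.checkpoint()                                               # 1. checkpoint state
    agent.context = agent.summarize()                                # 2. compress session
    agent.notify_operator("collapse event: paused pending review")   # 3. notify operator
    agent.state = AgentState.PAUSED                                  # 4. pause new tasks
    # 5. no auto-resume: only an explicit, human-approved resume() restarts work

def resume(agent: Agent, approved_by: str) -> None:
    if approved_by:
        agent.state = AgentState.RUNNING
```

The key design point is step 5: nothing in `handle_collapse` flips the state back, so resumption only ever happens through the human-approval path.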
The agent establishes a baseline embedding from its first 10 turns. Every 5 turns thereafter, it checks the cosine distance of its current outputs against that baseline. If distance exceeds the configured threshold (default 0.30), it flags a drift event and re-anchors from the last checkpoint.
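The baseline-and-check cycle above can be sketched in a few lines. This is an illustration under assumptions: `embed` stands in for whatever embedding function your stack provides (assumed to return nonzero vectors), and comparing the baseline against the mean of the last five turns is one reasonable reading of "current outputs":

```python
from math import sqrt

DRIFT_THRESHOLD = 0.30   # default from COLLAPSE.md
BASELINE_TURNS = 10      # turns used to establish the baseline embedding
CHECK_INTERVAL = 5       # check drift every N turns thereafter

def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def mean_vector(vectors: list[list[float]]) -> list[float]:
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

class DriftMonitor:
    def __init__(self, embed):
        self.embed = embed   # hypothetical embedding function: str -> list[float]
        self.baseline: list[float] | None = None
        self.turns: list[list[float]] = []

    def observe(self, output_text: str) -> bool:
        """Record one turn; return True if a drift event fires."""
        self.turns.append(self.embed(output_text))
        if self.baseline is None:
            if len(self.turns) == BASELINE_TURNS:
                self.baseline = mean_vector(self.turns)
            return False
        if len(self.turns) % CHECK_INTERVAL != 0:
            return False
        recent = mean_vector(self.turns[-CHECK_INTERVAL:])
        return cosine_distance(self.baseline, recent) > DRIFT_THRESHOLD
```

When `observe` returns `True`, the agent would flag the drift event and re-anchor from its last checkpoint, per the recovery protocol.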
No — they are complementary. COMPRESSION.md is proactive: compress context before it's a problem. COLLAPSE.md is reactive: detect and recover when compression hasn't prevented degradation. Use both together for comprehensive context health management.
Yes — it is framework-agnostic. It defines the policy; your agent implementation enforces it. Works with LangChain, AutoGen, CrewAI, Claude Code, custom agents, or any AI system that can monitor its own context utilisation.
This domain is available for acquisition. It is the canonical home of the COLLAPSE.md specification — the context coherence layer of the AI agent safety stack, essential for any long-running agent deployment.
Inquire about acquisition, or email directly: info@collapse.md