
This research was conducted as a human-AI collaboration, with the research process itself serving as an illustrative implementation of the ASM principles described in the paper.

This paper argues that Artificial General Intelligence requires a capability the field has not explicitly identified: Autonomous State Management (ASM), the ability to autonomously decide what information to maintain, update, compress, and discard based on situational understanding rather than predefined rules.

The problem: Enterprise AI projects report 42–95% failure rates. Frontier models lose 30–40% of information in middle context positions. Multi-step agents fail 41–87% of the time. Current solutions (longer contexts, better prompts, larger models) address symptoms, not causes.

The argument: Drawing on neuroscience (working memory correlates r = 0.5–0.77 with general intelligence), cognitive architecture precedents (ACT-R and Soar treat working memory as fundamental), and systematic analysis of current system limitations, we argue that ASM is a necessary (though not sufficient) condition for AGI.
Contributions:
- Formal definition distinguishing ASM from existing memory approaches
- Seven observable behaviors with testable pass/fail criteria, mapped to executive function components
- Four novel evaluation benchmarks with operationalized relevance criteria
- Resolution of the infinite regress problem through grounded termination in objectives
- The hidden state problem addressed via retirement (not deletion), backward-propagating relevance, and uncertainty-aware retention
- Distinction from agentic workflows (ReAct, CoT): ASM manages the cognitive substrate that action selection operates on
- Defense against the Bitter Lesson objection: ASM is a general architectural capability, not hand-crafted heuristics
- Computational overhead analysis showing event-driven management is cheaper than brute-force processing
- Clarification of implementation levels: context pruning → representational updates → parameter plasticity
- Illustrative manual implementation demonstrating achievability with current LLM capabilities

Scope: This paper is diagnosis, not prescription. We identify what capability is missing more rigorously than how to build it. No controlled experiments were conducted. This is a position paper identifying a research direction: a request for research, not a claim of solution.
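The mechanisms named above (event-driven management, retirement rather than deletion, and uncertainty-aware retention) can be illustrated in a few lines of code. The following is a minimal, hypothetical sketch, not the paper's implementation: the class `WorkingState`, its thresholds, and the relevance/uncertainty fields are all illustrative names chosen here, under the assumption that relevance scores are supplied by some external estimator.

```python
from dataclasses import dataclass

@dataclass
class StateItem:
    content: str
    relevance: float    # estimated relevance to current objectives (0..1)
    uncertainty: float  # how unsure we are about that relevance estimate (0..1)

class WorkingState:
    """Hypothetical ASM-style state manager: items are retired (archived,
    recoverable), never deleted, and low-relevance items are retained
    anyway when their relevance estimate is highly uncertain."""

    def __init__(self, retire_below: float = 0.2, keep_if_uncertain: float = 0.5):
        self.active: dict[str, StateItem] = {}
        self.retired: dict[str, StateItem] = {}
        self.retire_below = retire_below
        self.keep_if_uncertain = keep_if_uncertain

    def add(self, key: str, content: str, relevance: float, uncertainty: float = 0.0):
        self.active[key] = StateItem(content, relevance, uncertainty)

    def on_event(self, relevance_updates: dict[str, float]):
        # Event-driven: management runs only when an event changes the
        # situation, not on every inference step.
        for key, rel in relevance_updates.items():
            if key in self.active:
                self.active[key].relevance = rel
        self._manage()

    def _manage(self):
        for key in list(self.active):
            item = self.active[key]
            if (item.relevance < self.retire_below
                    and item.uncertainty < self.keep_if_uncertain):
                # Retire, don't delete: the item stays recoverable.
                self.retired[key] = self.active.pop(key)

    def restore(self, key: str):
        # Retirement is reversible, which is the point of not deleting.
        if key in self.retired:
            self.active[key] = self.retired.pop(key)
```

A short usage illustration: an item with low relevance but high uncertainty survives a management pass, while a confidently irrelevant item is retired yet remains restorable.

```python
ws = WorkingState()
ws.add("goal", "ship v2", relevance=0.9)
ws.add("detail", "stale log line", relevance=0.1)
ws.add("risk", "possibly relevant edge case", relevance=0.1, uncertainty=0.8)
ws.on_event({})                      # triggers a management pass
# "detail" is retired; "risk" is kept due to uncertainty; both recoverable
ws.restore("detail")
```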
Autonomous State Management, ASM, Artificial General Intelligence, AGI, context management, cognitive architecture, memory systems, state management, cognitive persistence, agentic AI, executive function, LLM agents, working memory
