
Large Language Models (LLMs) based on Transformers demonstrate broad competence but fall short of Artificial General Intelligence (AGI): they lack persistent internal state, grounded world models, robust long-horizon planning, causal understanding via intervention, reliable continual learning, and compute-rational meta-control. This paper proposes AGI, a specific, end-to-end trainable agent architecture (named literally "AGI") that operationalizes these missing capabilities. AGI is not a "tool wrapper" around an LLM; instead, it is a closed-loop cognitive system with (i) a multi-scale consensus object-centric perceptual front-end resilient to adversarial and out-of-distribution noise, (ii) a persistent latent state core (SSM/RNN-like) that runs continuously, (iii) an explicit causal world model whose structure is discovered via hierarchical amortized causal discovery that scales to rich environments, (iv) a three-scale memory substrate with information-theoretically bounded management and tiered consolidation that guarantees stable lifelong learning under finite storage, (v) a hierarchical planner/executive separated from language and hardened against misgeneralization and deceptive planning through constraint-verified transparent planning with causal invariance testing, (vi) a skill compiler that converts solved problems into executable, inspectable, tested programs, and (vii) a meta-learner that allocates compute and updates fast weights for rapid adaptation. We additionally introduce a Procedural Generality Testing Framework (PGTF) that replaces static benchmarks with procedurally generated, compositionally controlled task distributions for rigorous evaluation of general competence. We specify module interfaces, dataflow, training objectives, and an implementation-level control loop. The result is a complete blueprint for an AGI-oriented system whose primary competency is interactive generalization under constraints, not next-token prediction.
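To make the dataflow concrete, the sketch below illustrates one pass of the closed-loop control cycle the abstract describes: the perceptual front-end feeds the persistent latent core, the causal world model and hierarchical planner select an action under a compute budget set by the meta-learner, and the step is written back to the memory substrate. This is a minimal sketch; all class names, method signatures, and the Python framing are illustrative assumptions, not the interfaces specified in the paper.

```python
# Hypothetical sketch of the perceive -> update -> plan -> act loop from the abstract.
# All names and shapes are illustrative assumptions, not the paper's actual API.
from dataclasses import dataclass, field


@dataclass
class Observation:
    pixels: list                      # raw sensory input (placeholder)


@dataclass
class AgentState:
    latent: list = field(default_factory=lambda: [0.0] * 8)   # persistent latent core
    episodic_memory: list = field(default_factory=list)       # one tier of the memory substrate


class PerceptualFrontEnd:
    def encode(self, obs: Observation) -> list:
        # Object-centric, multi-scale consensus encoding (stubbed as identity here).
        return obs.pixels


class CausalWorldModel:
    def predict(self, latent: list, action: int) -> list:
        # Roll the explicit causal model forward one step (stub).
        return latent


class HierarchicalPlanner:
    def plan(self, latent: list, goal: str) -> int:
        # Constraint-verified, transparent planning reduced here to one action choice.
        return 0


class MetaLearner:
    def allocate_compute(self, latent: list) -> int:
        # Decide how many planning iterations this step deserves.
        return 1


def control_step(state: AgentState, obs: Observation, goal: str) -> int:
    """One pass of the closed-loop cognitive cycle."""
    percept = PerceptualFrontEnd().encode(obs)
    state.latent = CausalWorldModel().predict(percept, action=0)
    budget = MetaLearner().allocate_compute(state.latent)
    action = 0
    for _ in range(budget):                                    # compute-rational planning budget
        action = HierarchicalPlanner().plan(state.latent, goal)
    state.episodic_memory.append((percept, action))            # consolidate into memory
    return action


if __name__ == "__main__":
    state = AgentState()
    chosen = control_step(state, Observation(pixels=[0.1] * 8), goal="reach-target")
    print("chosen action:", chosen)
```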
