
The dream of a perfectly self-aligning AI, one that monitors itself, corrects itself, and needs no external oversight, runs into a physical wall. This paper derives that wall precisely. Any system maintaining a fixed identity under noise must process information at a rate proportional to the entropy of its environment, and processing that information costs thermodynamic work. Under strict informational closure, that work must come from a finite internal budget. The result: a computable survival bound T*, after which identity collapse is guaranteed regardless of how sophisticated the self-model is. Deeper self-modeling does not extend T*; it accelerates collapse by introducing latency that destabilizes the control loop. The paper unifies Ashby's cybernetics, Ruelle-Pesin chaos theory, topological feedback entropy, and Landauer's principle into one chain. External grounding is not a design choice. It is a thermodynamic necessity.
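To make the shape of the bound concrete, here is a minimal back-of-the-envelope sketch, not the paper's actual derivation: it assumes the simplest linear form consistent with the summary above, and the symbols W_0 (internal work budget), h_env (environmental entropy rate in bits per unit time), and T (operating temperature) are illustrative placeholders rather than the paper's notation.

```latex
% Illustrative sketch only, not the paper's derivation.
% Landauer's principle: erasing one bit dissipates at least k_B T ln 2 of work.
% If maintaining a fixed identity against noise requires processing (and hence
% eventually erasing) h_env bits per unit time, the minimum work rate is:
\[
  \dot{W} \;\ge\; k_B T \ln 2 \cdot h_{\mathrm{env}}
\]
% Under strict informational closure that work draws on a finite internal
% budget W_0, giving a survival horizon of the form:
\[
  T^{*} \;\le\; \frac{W_0}{k_B T \ln 2 \cdot h_{\mathrm{env}}}
\]
```

Under this toy form, halving the environment's entropy rate only doubles the horizon; no amount of internal cleverness removes the ceiling. The sole escape is to break closure and import work or information from outside, which is the sense in which external grounding is thermodynamically necessary.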
