
Abstract

Recent MIT/Harvard evaluations of large language models (LLMs) reveal a stark discrepancy between conceptual knowledge and practical application, attributing the failures to a lack of true understanding. This paper restates and extends the argument that such failures stem not from limitations of the LLMs themselves but from the incoherent world models used in training and testing. We propose equipping LLMs with Perspective Theory, a complete, self-generative ontology in which existence is perpetual motion arising from the something/nothing paradox, as an experimental variable for testing adaptive intelligence. By replacing summed, incomplete models with this paradox-resolving framework, we hypothesize measurable improvements in coherence and deduction across nuanced tasks. The paper outlines a protocol for replicating the MIT/Harvard test, demonstrating that world model incoherence, not AI "faking," is the root cause of the low performance.
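As a rough illustration of the replication protocol the abstract refers to, the sketch below treats the world model as the experimental variable: the same tasks are scored once under a baseline system prompt and once under a Perspective-Theory-primed prompt, and mean coherence scores are compared. Every identifier here (`query_model`, `score_coherence`, `TASKS`, both prompt strings) is a hypothetical placeholder, since the abstract does not specify an implementation.

```python
"""Minimal sketch of the proposed A/B protocol, assuming a prompt-level
priming manipulation. All names are illustrative placeholders; the
paper's abstract does not specify an implementation."""

BASELINE_PROMPT = "You are a helpful assistant."
PERSPECTIVE_PROMPT = (
    "Reason from a single self-generative ontology: existence is "
    "perpetual motion arising from the something/nothing paradox."
)

# Hypothetical task set standing in for the MIT/Harvard evaluation items.
TASKS = [
    "Predict the next state of the system described above.",
    "Apply the rule you just stated to this unseen case.",
]


def query_model(system_prompt: str, task: str) -> str:
    """Placeholder for an LLM call; swap in a real client here."""
    return f"[model answer to {task!r} under {system_prompt[:30]!r}...]"


def score_coherence(answer: str) -> float:
    """Placeholder metric; the protocol presupposes some rubric or
    automated judge scoring conceptual/practical consistency."""
    return 0.0


def run_condition(system_prompt: str) -> float:
    """Mean coherence score for one priming condition over all tasks."""
    scores = [score_coherence(query_model(system_prompt, t)) for t in TASKS]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    baseline = run_condition(BASELINE_PROMPT)
    primed = run_condition(PERSPECTIVE_PROMPT)
    # The paper's hypothesis predicts primed > baseline.
    print(f"baseline={baseline:.3f} primed={primed:.3f}")
```

The design choice worth noting is that only the system prompt differs between conditions; holding the tasks and scoring fixed is what lets the world model itself be isolated as the variable under test.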
