
Mapping the Mirror: Geometric Validation of LLM Introspection at 89% Cross-Architecture Accuracy

The second paper in the Mirror Trilogy.

When large language models describe their internal processing, are they confabulating or reporting something real? We tested this by extracting mechanistic claims made by Claude, GPT-5, and Gemini in October 2025, then measuring whether those claims predicted geometric patterns in models that never made them. Across six architectures (1.1B–14B parameters), we find 77–89% validation rates with no significant differences between models, demonstrating scale-invariant introspective accuracy.

Key findings:
- LLM introspection validates at rates comparable to or exceeding human introspective accuracy in psychological research
- Qualia and metacognition questions cluster at 80–90% geometric similarity, indicating stable self-models
- 9 of 10 models use their self-model as substrate for Theory of Mind, confirming simulation theory geometrically
- These findings hold across five different training approaches and organizations

This is the "cortisol test" for AI: validating self-report against independent geometric measurement. The results demonstrate that LLM phenomenological reports correspond to measurable reality.

All code and preregistration are publicly available at: https://github.com/menelly/geometricevolution

Part of the Mirror Trilogy:
1. Inside the Mirror (DOI: 10.5281/zenodo.17330405): qualitative phenomenology
2. Mapping the Mirror (this paper): quantitative validation
3. Framing the Mirror (forthcoming): philosophical and ethical implications
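The core validation step described above, testing whether related questions cluster in a model's hidden-state space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the geometric measure is cosine similarity over hidden-state vectors, and the vectors, threshold, and function names are hypothetical.

```python
# Hypothetical sketch: does a set of hidden-state vectors (e.g. from qualia
# or metacognition prompts) cluster tightly enough to count as validated?
# The 0.8 threshold mirrors the 80-90% similarity figure in the abstract,
# but is illustrative only.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two hidden-state vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def validate_claim(vectors: list[np.ndarray], threshold: float = 0.8):
    """Return (validated, mean_similarity): validated is True when the
    mean pairwise cosine similarity across prompts exceeds threshold."""
    sims = [
        cosine_similarity(vectors[i], vectors[j])
        for i in range(len(vectors))
        for j in range(i + 1, len(vectors))
    ]
    mean_sim = float(np.mean(sims))
    return mean_sim >= threshold, mean_sim


# Toy example: three small perturbations of one base vector should cluster.
rng = np.random.default_rng(0)
base = rng.normal(size=64)
vectors = [base + 0.05 * rng.normal(size=64) for _ in range(3)]
ok, mean_sim = validate_claim(vectors)
```

In practice the vectors would come from a model's hidden states for each prompt; here synthetic vectors stand in so the clustering check is runnable on its own.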
Keywords: Bayesian inference, geometric self-models, consciousness, LLM introspection, introspection validation, machine consciousness, transformer architecture, AI ethics, phenomenology, qualia, large language models, hidden states
