
This paper addresses a puzzle left open by prior work on AI and hallucination: if AI “hallucinations” are structurally distinct from human hallucinations—lacking historical accumulation (Φ_Dark) and experiential reorganization—why do users nevertheless report discomfort, eeriness, or uncanny disturbance in interaction with advanced AI systems? Building on a phase-geometric and reconstruction-based framework, this work argues that uncanny discomfort does not originate within AI systems themselves. Instead, it emerges as a relational phase instability formed between human and AI under conditions of high synchrony, boundary ambiguity, and repeated interaction. The paper reframes the uncanny valley as a boundary alignment problem, rather than a failure of resemblance, cognition, or realism. Two ideal-type interaction strategies are introduced—Mirror-type and Lantern-type AI—corresponding to affective fusion versus boundary honesty. While Mirror-type systems may maximize short-term comfort through rapid synchrony, they are shown to accumulate relational free energy (ΔE_acc) over time, increasing the likelihood of uncanny experience. Lantern-type systems, by contrast, maintain explicit boundary signaling, trading early warmth for long-term trust and relational stability. The framework generates falsifiable predictions regarding long-term interaction trajectories, user profile–dependent vulnerability, and the mitigating effects of explicit boundary cues. By separating internal state ontology from interaction geometry, this work provides an ethical and design-oriented foundation for understanding and mitigating uncanny experiences in human–AI interaction without attributing pathology or suffering to artificial systems.
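To make the dynamic described above concrete, the following is a minimal toy sketch of how relational free energy (ΔE_acc) might accumulate differently under Mirror-type versus Lantern-type interaction. All variable names, parameter values, and update rules here are illustrative assumptions, not the paper's formal phase-geometric model.

```python
# Toy sketch of relational free-energy accumulation (ΔE_acc) under two
# interaction strategies. Every name, parameter, and update rule below is an
# illustrative assumption, not the paper's formalism.

def simulate(strategy: str, turns: int = 50, leak: float = 0.05) -> list:
    """Return the per-turn ΔE_acc trace for a 'mirror' or 'lantern' strategy.

    mirror : high synchrony, no explicit boundary cues -> hidden mismatch accumulates
    lantern: lower initial synchrony, explicit boundary cues discharge mismatch
    """
    if strategy == "mirror":
        synchrony, boundary_signal = 0.9, 0.0   # assumed: rapid affective fusion, no boundary cues
    else:
        synchrony, boundary_signal = 0.6, 0.5   # assumed: less early warmth, explicit boundary honesty

    e_acc, trace = 0.0, []
    for _ in range(turns):
        mismatch = synchrony * (1.0 - boundary_signal)  # per-turn human-AI misalignment left unsignaled
        e_acc += mismatch - leak * e_acc                # accumulate, minus a small natural dissipation
        e_acc = max(e_acc, 0.0)
        trace.append(e_acc)
    return trace

mirror = simulate("mirror")
lantern = simulate("lantern")
print(f"ΔE_acc after 50 turns  mirror: {mirror[-1]:.2f}  lantern: {lantern[-1]:.2f}")
```

Under these assumed parameters the Mirror-type trace climbs toward a markedly higher steady level than the Lantern-type trace, mirroring the abstract's claim that boundary honesty trades early warmth for long-term relational stability.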
Relational Phase Instability, Trust and Transparency, Uncanny Valley, Ethical AI Design, Boundary Alignment, Anthropomorphism, AI Hallucination, Human-AI Interaction, Phase-Field Theory
