
Dream characters, the figures we interact with during sleep, exhibit structural parallels to AI systems: both are substrate-generated, display apparent autonomy, lack continuity, and present an irresolvable question regarding inner experience. I argue that our inability to determine whether dream characters possess phenomenal experience should function as a decisive check on claims, affirmative or dismissive, about machine consciousness. I demonstrate that this inability is not an artefact of theoretical neglect: Integrated Information Theory (IIT), Global Workspace Theory, Higher-Order Theories, Recurrent Processing Theory, and enactivism each fail to settle the dream character question, and for principled reasons. The dream character problem is not a restatement of the hard problem of consciousness; it is a lived, empirically documented epistemic situation that exposes the structural limitations of our best theoretical tools when applied to substrate-generated entities, the very category to which AI belongs.
dream characters, hard problem of consciousness, philosophy of mind, consciousness, AI consciousness
