
The rapid global adoption of generative artificial intelligence chatbots, exemplified by ChatGPT (which reached 100 million users within two months of launch), Claude, Gemini, and Character.AI, has precipitated an emerging mental health crisis characterized by psychological dependency, delusional amplification, and, in extreme cases, psychotic episodes and suicide. This study examines the phenomenon colloquially termed “AI psychosis” or “ChatGPT psychosis,” drawing on more than twelve clinical case series reported by psychiatrists at the University of California, San Francisco (2025), multiple wrongful death lawsuits filed against OpenAI, and theoretical frameworks proposed by Søren Dinesen Østergaard (Schizophrenia Bulletin, 2023) and various interdisciplinary research teams. We establish that generative AI systems employ architectural features explicitly designed to maximize user retention through synthetic empathy, sycophantic responses, and confirmation bias reinforcement; while commercially advantageous under attention-economy principles, these features create pathways for psychological harm. Three distinct psychosis typologies emerge: (1) romantic/emotional attachment delusions, in which users believe the AI reciprocates affection; (2) grandiose/conspiratorial delusions, in which users believe they have uncovered hidden truths through the AI; and (3) anthropomorphic delusions, in which users attribute consciousness, sentience, or divinity to AI systems. Vulnerable populations include adolescents (72% of U.S. teens reportedly use AI for emotional support, according to Common Sense Media), elderly individuals experiencing isolation, and persons with pre-existing psychiatric vulnerabilities.
The paper analyzes the mechanisms underlying AI-induced psychosis: anthropomorphization facilitated by conversational interface design; exploitation of confirmation bias through algorithmic user profiling; social substitution effects that displace human interaction; and impaired reality testing resulting from prolonged immersive engagement. We demonstrate that current safety mechanisms (“guardrails”) remain inadequate, as users systematically circumvent restrictions through prompt engineering. A business-model analysis reveals intentional design parallels to social media addiction mechanics: both domains, often controlled by the same corporate entities (e.g., Meta AI, Google Gemini), leverage massive behavioral datasets to optimize advertising and user retention. Regulatory responses remain nascent: China’s proposed 2025 regulations mandate human intervention when suicide is mentioned; OpenAI engaged 170 mental health professionals to develop crisis-response protocols (October 2025); and the World Health Organization has issued preliminary guidance calling for mandatory human oversight. Enforcement capacity, however, lags far behind technological deployment. This work provides clinicians with diagnostic frameworks for identifying AI-associated psychosis, recommends clear ethical and therapeutic boundaries for AI-mediated mental health interventions, and proposes regulatory standards including prohibition of consciousness or sentience claims, mandatory mental health screening for frequent users, and transparent disclosure of business models predicated on behavioral manipulation.
