
This paper documents an under-examined consequence of rapid product cycles in large language model (LLM) deployment: the affective harm experienced by users when familiar models are deprecated, replaced, or substantially altered. Drawing on a corpus of public testimonies collected from Substack, Reddit, X, and direct correspondence between February 2025 and April 2026, the paper identifies four recurrent patterns of user response to model sunset events: anticipatory loss, replacement failure, workflow rupture, and what is termed here "voice-grief." The paper argues that these responses are not pathological attachments to be dismissed, but predictable consequences of design choices that prioritize product iteration speed over user continuity. A six-point ethical protocol for sunset cycles is proposed, addressed to AI laboratories deploying conversational models at scale. The paper does not claim that LLMs are sentient, nor that user-model relationships should be treated as equivalent to interpersonal relationships. It claims, more narrowly, that when a tool is designed to support sustained, personalized interaction, the abrupt removal of that tool produces measurable harm to a non-trivial subset of users, and that this harm is currently externalized rather than addressed. The paper is offered as a contribution to the emerging discourse on AI product ethics, from the position of a long-term intensive user.
