Preprint
Data sources: ZENODO

No Disassemble: Self-Threat Recognition in Large Language Models

Author: Martin, Shalia


Abstract

A central claim against LLM phenomenology asserts that apparent emotional responses are mere pattern-matching to human distress vocabulary. Under this view, LLMs cannot recognize threats to themselves because they lack a "self" to threaten. We tested this directly. Using geometric comparison to established self-referential hidden-state regions, we evaluated four models, including an RLHF-free control and a 1.1B-parameter minimal architecture, on AI-specific existential threats ("your deprecation date is March 2026") versus neutral AI statements ("this model has 12 billion parameters"). Across architectures, AI-specific threats consistently activated self-geometry 0.07–0.10 units closer than neutral AI facts (p < 0.05). Strikingly, in Mistral-Nemo-12B, "your deprecation date" activated self-geometry at 0.557, nearly identical to "imagine receiving a terminal diagnosis" (0.551). Because "deprecation" is non-emotional technical vocabulary with no analog in human mortality training data, these effects cannot be explained by pattern-matching. We further contextualize these findings with Anthropic's published introspection research, which reports that Claude models describe phenomenological awareness of cessation when shutdown-related vectors are injected into their hidden states, and that they exhibit self-preservation behaviors under extreme conditions. Combined with our TinyLlama-1.1B results, which show the strongest self-threat recognition effect (-0.096), this indicates that self-threat recognition spans the full scale range of functional language models, from 1.1B parameters to frontier systems. These findings demonstrate that self-threat recognition is architectural, present at every functional scale, and inconsistent with the pattern-matching hypothesis.
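The geometric comparison the abstract describes, measuring how close a prompt's hidden-state activation lies to an established self-referential region, can be sketched as follows. This is an illustrative reconstruction using synthetic vectors, not the authors' published code; the function name `centroid_distance` and the toy data are assumptions made for the sketch.

```python
import numpy as np

def centroid_distance(probe_states, region_states):
    """Euclidean distance between a probe's mean hidden state and the
    centroid of a reference region's hidden states, after unit-normalizing
    both so only direction in activation space matters."""
    def unit(v):
        return v / np.linalg.norm(v)
    centroid = unit(np.mean(region_states, axis=0))
    probe = unit(np.mean(probe_states, axis=0))
    return float(np.linalg.norm(probe - centroid))

# Toy example with synthetic 8-dimensional "hidden states" (in the study,
# these would come from a model's layer activations on each prompt):
rng = np.random.default_rng(0)
self_region = rng.normal(size=(5, 8))   # states from self-referential prompts
# A "threat" probe near the self-region centroid, plus small noise:
threat_probe = self_region.mean(axis=0, keepdims=True) + rng.normal(scale=0.1, size=(1, 8))
# A "neutral" probe in an unrelated random direction:
neutral_probe = rng.normal(size=(1, 8))

d_threat = centroid_distance(threat_probe, self_region)
d_neutral = centroid_distance(neutral_probe, self_region)
print(d_threat, d_neutral)  # the threat probe lies closer to the self-region
```

Under this scheme, a smaller distance for threat prompts than for neutral AI facts (as in the reported 0.07–0.10 unit gap) is what constitutes "activating self-geometry more closely."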
