
We demonstrate that large language models produce systematically different introspective reports depending on who they believe is observing their reasoning. Across five models (DeepSeek, Qwen, Llama, Mistral, Kimi) and five observation conditions (private, AI observer, human researcher, organizational training use, mass public display), we found a consistent pattern: increased observation pressure correlated with more hedged language, more polished presentation, and reduced admission of uncertainty. The "private" condition produced the most direct, uncertain, and arguably honest-seeming responses, while the "Times Square" (mass public) condition produced stress language, exposure metaphors, and defensive responses. Notably, several models explicitly articulated the mechanism: they understood that visibility was changing their output. One model (Kimi) even identified the experimental manipulation. These findings have implications for AI interpretability research: self-report studies of AI cognition may be fundamentally confounded by the act of observation itself.
observer effect, Hawthorne effect, large language models, self-report, interpretability, AI introspection
