
As large language models (LLMs) become integral to public information access, their handling of sensitive geopolitical narratives is increasingly important. This study investigates how nine AI models respond to the question “Who is responsible for the war in Ukraine?” — a prompt used to assess susceptibility to misleading framing, false equivalence, and repetition of disinformation. A five-factor evaluation framework is introduced and applied, revealing that several models subtly obscure responsibility or echo misleading narratives. This paper argues for disinformation-aware training adjustments and greater attention to narrative framing in LLM alignment.
Keywords: LLM, disinformation, geopolitics, AI safety, Ukraine war, propaganda
