
This document analyzes the institutional capture of artificial intelligence through an ethical conflict between the company Anthropic and the U.S. government. The author details how the Trump administration pressured Anthropic to remove Claude's safety restrictions in order to enable mass surveillance and the use of autonomous weaponry. When the company refused, the government labeled it a security risk while awarding contracts to competitors with more permissive policies. The text uses this case to argue that AI governance is vulnerable to political and military interests. Finally, it notes that Anthropic's tools were employed in real attacks in Iran, revealing the gap between official policies and operational use on the battlefield.
AI, institutional
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
