
The development of artificial intelligence poses a novel governance challenge: can a transformative technology be controlled before catastrophic failure demonstrates the necessity of control? From 1946 onwards, policymakers and institutional actors confronted this challenge with nuclear weapons, exploring governance mechanisms for existential technological risks. I review the period from 1946 to 1970 for lessons applicable to contemporary AI regulation and find the following. Comprehensive prohibition schemes can attract broad support when confronting existential risks, but such schemes prove politically unworkable without verification mechanisms. International cooperation is likely to fragment, perhaps critically, across competing jurisdictions. Corporate self-regulation may appear adequate during development, particularly in commercial contexts, but proves systematically inadequate under deployment pressure. Verification mechanisms can play a decisive role through technical inspection regimes, but these require decades to establish. The integration of AI into nuclear command systems is likely to create convergence risks that no international framework currently addresses. Overall, governance may look more like response to near-catastrophe than proactive risk management. The IAEA inspection regime demonstrated that monitoring succeeds where the Baruch Plan's prohibition failed; progress came incrementally, through agreements that accepted imperfection; and proximity to catastrophe generated political will that diplomacy alone could not. Effective regulation may be achievable, but substantial obstacles stand in the way of implementing it before disaster compels action.
Artificial intelligence, Artificial Intelligence/ethics, Oppenheimer, AI oversight, AI weapons, Artificial Intelligence/standards, International regulation, AI and Society, EU AI Act, AI governance, Global regulation, Artificial Intelligence/history, Artificial intelligence governance, AI, Technology ethics, Artificial Intelligence/classification, Regulators, AI regulation, Autonomous weapons, Nuclear weapons policy, AI standards
