
Abstract: The growing use of artificial intelligence (AI) in nuclear command, control, and communications (NC3), especially in space-based security systems, marks a significant shift in how strategic decisions may be made. AI systems can improve information processing and operational speed, but they also create new risks that may undermine strategic stability. This study explores the ethical, technical, and strategic challenges that arise when nuclear decision-making relies on autonomous or semi-autonomous systems. Drawing on deterrence theory, research on AI safety, studies of human–machine interaction, and existing space law, the paper identifies three primary sources of risk: shortened decision timelines, ambiguous responsibility for decisions, and weaknesses in sensor reliability. To demonstrate how these factors interact, the study presents a stochastic escalation model showing how fast-paced machine interactions can raise the risk of unintended conflict in uncertain conditions. The paper concludes by outlining a Human-Centric Heuristic (HCH) governance model that emphasizes sustained human control while still supporting timely operational decisions.
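The abstract's stochastic escalation model is not specified here, so the following is only a minimal Monte Carlo sketch of the general idea. All function names, parameter values, and functional forms (in particular the assumed link between a shorter human decision window, weaker verification of sensor errors, and a higher per-exchange escalation probability) are illustrative assumptions, not the paper's actual model.

```python
import random

def simulate_crisis(decision_window_s: float,
                    sensor_error_rate: float,
                    n_exchanges: int = 10,
                    seed: int | None = None) -> bool:
    """Return True if one simulated crisis escalates to unintended conflict.

    Assumption: each machine-to-machine exchange carries a per-step
    misclassification probability that grows as the human decision
    window shrinks, because less time remains to catch sensor errors.
    """
    rng = random.Random(seed)
    # Hypothetical form: a 300 s window is taken as the baseline in which
    # human review intercepts ~90% of sensor false positives.
    verification_factor = min(1.0, decision_window_s / 300.0)
    p_step = sensor_error_rate * (1.0 - 0.9 * verification_factor)
    for _ in range(n_exchanges):
        if rng.random() < p_step:
            return True  # a false positive propagates and escalates
    return False

def escalation_probability(decision_window_s: float,
                           sensor_error_rate: float,
                           trials: int = 100_000) -> float:
    """Estimate P(escalation) by averaging over many simulated crises."""
    hits = sum(simulate_crisis(decision_window_s, sensor_error_rate, seed=i)
               for i in range(trials))
    return hits / trials

if __name__ == "__main__":
    for window in (300, 60, 5):  # seconds available for human review
        p = escalation_probability(window, sensor_error_rate=0.01)
        print(f"decision window {window:>4}s -> P(escalation) ~ {p:.3f}")
```

Running this sketch shows the qualitative pattern the abstract describes: with the sensor error rate held fixed, compressing the decision window from minutes to seconds sharply raises the estimated probability of unintended escalation, because errors are acted on faster than they can be verified.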
