ZENODO
Other literature type, 2026
License: CC BY
Data sources: ZENODO

Control, Determinism, and Failure Modes in Large Language Model–Driven Systems

Authors: Sucala, Alexander


Abstract

Large language models (LLMs) are increasingly deployed within systems that perform autonomous or semi-autonomous actions. While model capabilities have advanced rapidly, the surrounding control architectures required to ensure reliability, determinism, and auditability have not kept pace. This mismatch has produced a recurring class of system-level failures, including hallucinated actions, uncontrolled retry or planning loops, irreproducible behavior, and ambiguous authority boundaries. This paper examines these failures from a systems-engineering perspective, reframing LLMs not as decision authorities or sources of truth, but as probabilistic execution components embedded within externally governed control environments. Drawing on empirical observations from hundreds of structured experiments, the work identifies common architectural anti-patterns and distills a set of necessary design requirements for building deterministic, auditable, and secure LLM-driven systems. Rather than proposing new model architectures or prompt optimization techniques, the paper focuses on external governance mechanisms: explicit authority separation, deterministic halting conditions, artifact-based validation, null-result legitimacy, and reproducible state management. The results suggest that many behaviors commonly attributed to “model failure” are in fact induced by architectural design choices that improperly assign authority to stochastic components. The contribution of this work is a model-agnostic framework for reasoning about control, failure modes, and system integrity in AI deployments, particularly in domains where safety, auditability, and predictability are critical.
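The governance mechanisms the abstract names — authority held outside the model, a deterministic halting condition, artifact-based validation of outputs, and a legitimate null result — can be illustrated with a minimal sketch. The names here (`governed_call`, `validate`, the callable standing in for a model invocation) are hypothetical illustrations, not an API from the paper:

```python
from typing import Callable, Optional

def governed_call(
    model: Callable[[str], str],
    prompt: str,
    validate: Callable[[str], bool],
    max_attempts: int = 3,
) -> Optional[str]:
    """Invoke a probabilistic component under external governance.

    Authority lives in this loop, not in the model: a deterministic
    halting condition (max_attempts) bounds retries, every output is
    checked by an external validator against concrete criteria
    (artifact-based validation), and exhausting the budget returns
    None -- a legitimate null result rather than a forced answer.
    """
    for _ in range(max_attempts):
        output = model(prompt)   # stochastic component: output is untrusted
        if validate(output):     # acceptance decided outside the model
            return output
    return None                  # null-result legitimacy: "no answer" is valid
```

The point of the sketch is the inversion of authority: the model proposes, a deterministic wrapper disposes, and the wrapper can always halt with `None` instead of looping or accepting an unvalidated action.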

Keywords

AI Control Systems, AI System Architecture, Probabilistic Systems, Deterministic AI, Autonomous Agent Reliability, Large Language Models (LLMs), System-Level Governance, AI Safety Engineering
