
This paper presents DILLO (Decision Intelligence & Logic Layer Orchestrator), a conceptual governance architecture that introduces an explicit decision authorization layer between probabilistic inference and system execution in artificial intelligence systems. The work identifies authority inversion as a dominant failure mode in contemporary AI, wherein predictive models are implicitly granted operational authority without admissibility checks. DILLO enforces architectural separation of intelligence, decision, and execution planes, enabling deterministic constraint evaluation, state-aware authorization, and explicit refusal or non-decision outcomes. The proposed architecture is model-agnostic and invariant across software, hardware-assisted, and multi-model deployments. The paper reframes AI safety, reliability, and accountability as system-level governance problems rather than deficiencies in model accuracy or alignment techniques.
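The abstract describes a decision authorization layer that applies deterministic, state-aware admissibility checks to model outputs before anything reaches execution, and that can return an explicit refusal or non-decision. The following is a minimal illustrative sketch of that idea, not the paper's own implementation; all names (DecisionPlane, SystemState, ModelOutput, the example constraint) are hypothetical and chosen only to make the plane separation concrete.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, List

# Illustrative sketch only; names and structure are assumptions, not DILLO's API.

class Outcome(Enum):
    AUTHORIZE = auto()    # execution plane may act on the model output
    REFUSE = auto()       # output is inadmissible; execution is blocked
    NO_DECISION = auto()  # system state is insufficient to decide either way

@dataclass
class SystemState:
    operator_present: bool
    safety_interlock_ok: bool

@dataclass
class ModelOutput:
    action: str
    confidence: float

# A constraint is a deterministic predicate over (state, model output).
Constraint = Callable[[SystemState, ModelOutput], bool]

class DecisionPlane:
    """Sits between the intelligence plane (the model) and the execution plane."""

    def __init__(self, constraints: List[Constraint], min_confidence: float):
        self.constraints = constraints
        self.min_confidence = min_confidence

    def authorize(self, state: SystemState, output: ModelOutput) -> Outcome:
        # State-aware check: withhold a decision when the state cannot be trusted.
        if not state.safety_interlock_ok:
            return Outcome.NO_DECISION
        # Deterministic admissibility checks; any violation yields an explicit refusal.
        if output.confidence < self.min_confidence:
            return Outcome.REFUSE
        if not all(c(state, output) for c in self.constraints):
            return Outcome.REFUSE
        return Outcome.AUTHORIZE


if __name__ == "__main__":
    # Example constraint: actuation is only admissible with an operator present.
    no_unattended_actuation: Constraint = (
        lambda state, out: state.operator_present or out.action == "hold"
    )
    plane = DecisionPlane([no_unattended_actuation], min_confidence=0.9)
    state = SystemState(operator_present=False, safety_interlock_ok=True)
    result = plane.authorize(state, ModelOutput(action="actuate_valve", confidence=0.97))
    print(result)  # Outcome.REFUSE: the model is confident, but the action is inadmissible.
```

The point of the sketch is the separation itself: the model's confidence alone never grants operational authority; authorization is a distinct, deterministic step that can refuse or decline to decide.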
