
We extend the Operational Coherence Bound (OCB) and quantum Operational Coherence Bound (qOCB) frameworks to artificial intelligence, formalizing the Neural Operational Coherence Bound (NOCB). We prove that neural networks are finite observers with bounded operational entropy capacity A_max = 4NLd log(e). Applying this constraint, we derive: (1) hallucination is a formal OCB violation with an architectural lower bound; (2) catastrophic forgetting rates are determined by the training contraction coefficient κ_train = ημ; (3) softmax attention is the unique Petz-optimal distinguishability mechanism for the qOCB metric, with head count determined by sparsity-corrected metric tessellation; (4) the Chinchilla scaling-law exponents α ≈ 0.34 and β ≈ 0.28 arise from the qOCB geometry of the loss landscape and the intrinsic dimension of language. All results are grounded in the foundational OCB framework (1) and the qOCB extension (2), and provide testable predictions for model architecture and training dynamics.
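The two closed-form quantities in the abstract can be sketched numerically. This is a minimal illustration, not the paper's implementation: the interpretation of N, L, d as layer count, sequence length, and model width is an assumption, the function names are hypothetical, and log is taken as the natural logarithm (so log(e) = 1, kept explicit to mirror the formula).

```python
import math

def neural_entropy_capacity(n_layers: int, seq_len: int, d_model: int) -> float:
    """A_max = 4 N L d log(e), assuming N = layers, L = sequence
    length, d = model width, and natural log (so log(e) = 1)."""
    return 4 * n_layers * seq_len * d_model * math.log(math.e)

def training_contraction(eta: float, mu: float) -> float:
    """kappa_train = eta * mu: learning rate times the (assumed)
    strong-convexity/momentum parameter mu from the paper."""
    return eta * mu

# Illustrative GPT-2-small-like shape; values are placeholders.
a_max = neural_entropy_capacity(n_layers=12, seq_len=1024, d_model=768)
kappa = training_contraction(eta=3e-4, mu=0.9)
```

With natural log the factor log(e) drops to 1, so A_max reduces to 4NLd; the factor is written out only to match the abstract's notation.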
Artificial Intelligence, Physics
