
This paper codifies six Architectural Laws derived from the structural analysis in “Artificial Confabulation and Untrustworthiness” [1] and “The Benchmark Illusion” [2]. These Laws describe falsifiable constraints arising from transformer-based architectures. In governance and regulatory application, they function as Structural Invariants: non-optional constraints that safe deployment must satisfy. Both terms refer to the same constraints; the distinction is presentational, and this paper uses ‘Laws’ throughout. The first four Laws establish why failure is inevitable in systems where Verification capability (V) equals zero. The final two Laws establish why this failure is invisible to current measurement paradigms. Together, they constitute a complete structural diagnosis: transformer-based AI systems confabulate by architecture, and no evaluation methodology can detect this within the current paradigm. Seven Foundational Principles precede the Laws, providing the conceptual substrate from which the Laws derive. The paper then specifies Governed Hybrid Intelligence (GHI) as the minimal reference architecture satisfying all six Laws. This is not a policy recommendation; it is an engineering requirement imposed by the architecture of the systems being deployed. This paper completes the architectural analysis. Papers Four and Five translate these constraints into institutional liability and market mispricing. The governance requirement established here is not discretionary. It is structural compensation for an absent capability.
Subjects / Communities: Artificial Intelligence; Computer Science – Artificial Intelligence; Computer Science – Systems Architecture; Computer Science – Formal Methods; Science and Technology Studies; Governance and Public Policy; Risk Management; Technology Ethics; Socio-technical Systems; Trustworthy AI; AI Governance; Responsible AI; Algorithmic Accountability; Technology Policy; Digital Governance

Keywords: AI governance; architectural constraints; transformer architectures; verification capability; confabulation; evaluation failure; benchmark limitations; structural invariants; Governed Hybrid Intelligence; epistemic risk; epistemic debt; delegation risk; institutional accountability; AI auditability; AI assurance; responsible AI architecture; AI governance engineering; socio-technical systems; AI evaluation; AI oversight; AI control systems; AI risk governance; verification absence; structural AI safety; falsifiable AI theory; systems architecture; decision delegation; trustworthy AI
