ZENODO
Other literature type, 2026
License: CC BY
Data source: ZENODO

V = 0: Six Architectural Laws Mandating External Governance For AI

Author: Devin, Andrew James

Abstract

This paper codifies six Architectural Laws derived from the structural analysis in “Artificial Confabulation and Untrustworthiness” [1] and “The Benchmark Illusion” [2]. These Laws describe falsifiable constraints arising from transformer-based architectures. In governance and regulatory applications they function as Structural Invariants: non-optional constraints that safe deployment must satisfy. The two terms refer to the same constraints; the distinction is presentational, and this paper uses “Laws” throughout. The first four Laws establish why failure is inevitable in systems where Verification capability (V) equals zero. The final two Laws establish why this failure is invisible to current measurement paradigms. Together, they constitute a complete structural diagnosis: transformer-based AI systems confabulate by architecture, and no evaluation methodology can detect this within the current paradigm. Seven Foundational Principles precede the Laws, providing the conceptual substrate from which the Laws derive. The paper then specifies Governed Hybrid Intelligence (GHI) as the minimal reference architecture satisfying all six Laws. This is not a policy recommendation; it is an engineering requirement imposed by the architecture of the systems being deployed. This paper completes the architectural analysis; Papers Four and Five translate these constraints into their consequences for institutional liability and market mispricing. The governance requirement established here is not discretionary. It is structural compensation for an absent capability.
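
The paper itself ships no code, but the topology the abstract mandates admits a compact illustration. The sketch below is hypothetical (none of the names, Generator, GovernedPipeline, Claim, come from the paper): because the generator's internal verification capability is zero, the accept/reject decision must sit in a component external to the generator, and the safe default on a failed check is refusal.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Claim:
    """A statement emitted by the generator; its truth is not self-certified."""
    text: str

class Generator:
    """Stands in for a transformer-based system: fluent output, V = 0.
    It exposes no verify() method because any such method would itself
    be model output, i.e. another unverified claim."""
    def generate(self, prompt: str) -> Claim:
        # Produced by next-token prediction, not by checking against
        # ground truth, so confabulation is possible at this point.
        return Claim(text=f"Answer to {prompt!r}")

class GovernedPipeline:
    """External governance gate: verification is supplied from outside
    the generator and never delegated back to it."""
    def __init__(self, generator: Generator,
                 external_verifier: Callable[[Claim], bool]) -> None:
        self.generator = generator
        self.verify = external_verifier

    def answer(self, prompt: str) -> str:
        claim = self.generator.generate(prompt)
        if not self.verify(claim):
            # With V = 0 inside the model, the only safe default on a
            # failed external check is refusal, not self-correction.
            raise RuntimeError("claim failed external verification; withheld")
        return claim.text

# Toy usage: the verifier is a placeholder lookup against a trusted store;
# the point is the topology (an external gate), not this particular check.
trusted = {"Answer to 'capital of France'"}
pipeline = GovernedPipeline(Generator(), lambda c: c.text in trusted)
print(pipeline.answer("capital of France"))  # passes the gate
```

In any non-toy version of this sketch the verifier would be grounded in something the generator cannot influence (a curated knowledge base, a formal checker, a human reviewer); if the gate queries the model itself, V remains zero and the gate is decorative.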

Keywords

Subjects / Communities: Artificial Intelligence; Computer Science – Artificial Intelligence; Computer Science – Systems Architecture; Computer Science – Formal Methods; Science and Technology Studies; Governance and Public Policy; Risk Management; Technology Ethics; Socio-technical Systems; Trustworthy AI; AI Governance; Responsible AI; Algorithmic Accountability; Technology Policy; Digital Governance

Keywords: AI governance, architectural constraints, transformer architectures, verification capability, confabulation, evaluation failure, benchmark limitations, structural invariants, Governed Hybrid Intelligence, epistemic risk, epistemic debt, delegation risk, institutional accountability, AI auditability, AI assurance, responsible AI architecture, AI governance engineering, socio-technical systems, AI evaluation, AI oversight, AI control systems, AI risk governance, verification absence, structural AI safety, falsifiable AI theory, systems architecture, decision delegation, trustworthy AI
