
We extend the Governance Physics framework from output-level attestation to attention-level computational provenance. By recognizing the structural isomorphism between the softmax normalization constraint (Σⱼ attentionᵢⱼ = 1) and the trust conservation law (Σᵢ ωᵢ = K), we demonstrate that transformer attention mechanisms already encode the mathematical structure required for governance verification. We introduce attention fingerprinting—cryptographically hashing attention matrices at each layer into Merkle chains—enabling tamper-evident provenance of the computational path that produced a decision. Experimental evaluation across GPT-2 variants (124M–1.5B parameters) demonstrates 8–28% latency overhead with 100% conservation verification accuracy. This is Paper VI in the Governance Physics series.
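The sketch below illustrates the attention-fingerprinting idea described above: verify the softmax conservation constraint for each layer's attention matrix, hash the matrix, and chain the per-layer digests so tampering with any layer invalidates all subsequent fingerprints. This is a minimal illustration only, not the paper's reference implementation; it assumes attention matrices are available as NumPy arrays, uses a simple hash chain as a stand-in for the full Merkle construction, and all function names (e.g. `fingerprint_attention`) are hypothetical.

```python
# Minimal sketch of per-layer attention fingerprinting (illustrative only).
# Assumes attention matrices of shape (heads, seq_len, seq_len); a simple
# SHA-256 hash chain stands in for the paper's Merkle chain.
import hashlib
import numpy as np


def verify_conservation(attn: np.ndarray, tol: float = 1e-5) -> bool:
    """Check the softmax constraint: each attention row sums to 1."""
    return bool(np.allclose(attn.sum(axis=-1), 1.0, atol=tol))


def fingerprint_attention(layer_attns: list) -> list:
    """Hash each layer's attention matrix and chain the digests,
    so modifying any layer changes every later fingerprint."""
    chain = []
    prev = b"\x00" * 32  # genesis value for the chain
    for layer_idx, attn in enumerate(layer_attns):
        if not verify_conservation(attn):
            raise ValueError(f"conservation violated at layer {layer_idx}")
        layer_bytes = np.ascontiguousarray(attn, dtype=np.float32).tobytes()
        layer_hash = hashlib.sha256(layer_bytes).digest()
        prev = hashlib.sha256(prev + layer_hash).digest()
        chain.append(prev)
    return chain


# Example: three layers of random, softmax-normalized attention
rng = np.random.default_rng(0)
layers = [rng.random((12, 16, 16)) for _ in range(3)]
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]
print([h.hex()[:16] for h in fingerprint_attention(layers)])
```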
Keywords: attention mechanisms, computational provenance, mechanistic interpretability, Merkle trees, trust conservation, Byzantine fault tolerance, AI governance
