
Neural networks deployed behind APIs or in cloud infrastructure are often verifiable only as black boxes. zkML systems have made substantial progress on computational integrity: proving that a committed model produced a claimed output honestly. But those proofs begin from a weight commitment, and a weight commitment is not a model identity. A prover can commit to arbitrary weights, execute them honestly, and still prove the computation correctly. We present an identity-first verification framework for the missing layer beneath computational integrity. The framework composes four levels. Two are inherited: structurally attestable model fingerprints via the IT-PUF protocol, formally verified in Coq and validated across 23 models with zero false acceptances, and hardware-attested binding from fingerprinted identity to model weights through a trusted execution environment. Two are new: a hybrid verifier-checkable computation path through a complete Transformer decoder layer, combining zero-knowledge circuit proofs with deterministic verifier-side checks under incrementally verifiable computation, and output binding from the verified computation to an observable token logit. On a tested micro-model, a one-step recurrence experiment found costs consistent with linear layer scaling: the dominant sub-computation of a second decoder layer matched the first in constraint count and proof size, and layer-boundary normalization acted as a measured scale reset. An accidental rescaling error then compressed the fingerprint observable to roughly 1.5 bits of dynamic range, yet the structural fingerprint retained 0.98 rank correlation with its reference. This suggests that the identity observable may depend more on relational geometry than on activation magnitude. Existing zkML systems address the computation question. This work advances the missing identity layer beneath it. 
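The abstract's claim that the structural fingerprint survived a dynamic-range compression rests on a property of rank correlation: Spearman correlation depends only on the ordering of values, not their magnitudes, so any monotone rescaling leaves it unchanged. The sketch below (illustrative only, not the paper's code; the data and the compression function are invented for the example) demonstrates this with a pure-Python Spearman implementation:

```python
# Illustrative sketch: Spearman rank correlation depends only on the
# ordering of values, so a monotone compression of dynamic range leaves
# it unchanged -- consistent with the abstract's observation that the
# fingerprint retained high rank correlation after a ~1.5-bit compression.
import random

def ranks(xs):
    """Return the 0-based rank of each element of xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rho computed as the Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n - 1) / 2.0
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var

random.seed(0)
reference = [random.gauss(0, 1) for _ in range(100)]
# A monotone "rescaling error": compresses magnitudes into a tiny range
# but preserves the ordering of every pair of values.
compressed = [x / (1 + abs(x)) * 0.01 for x in reference]

print(spearman(reference, compressed))  # 1.0: ordering is untouched
```

A strictly monotone map preserves ranks exactly, so the correlation here is exactly 1.0; the paper's measured 0.98 reflects that its accidental rescaling was only approximately order-preserving.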
Throughout the paper, formally proved results, empirical validation, and single measured observations are distinguished as [PROVEN], [VALIDATED], and [MEASURED] respectively.

The 6-Paper Series

This verification stack serves as the mathematical foundation for the following open-access publications:

Paper 1: The δ-Gene: Inference-Time Physical Unclonable Functions from Architecture-Invariant Output Geometry (DOI: 10.5281/zenodo.18704275)
Paper 2: Template-Based Endpoint Verification via Logprob Order-Statistic Geometry (DOI: 10.5281/zenodo.18776711)
Paper 3: The Geometry of Model Theft: Distillation Forensics, Adversarial Erasure, and the Illusion of Spoofing (DOI: 10.5281/zenodo.18818608)
Paper 4: Provenance Generalization and Verification Scaling for Neural Network Forensics (DOI: 10.5281/zenodo.18872071)
Paper 5: Beneath the Character: The Structural Identity of Neural Networks — Mathematical Evidence for a Non-Narrative Layer of AI Identity (DOI: 10.5281/zenodo.18907292)
Paper 6: Which Model Is Running?: Structural Identity as a Prerequisite for Trustworthy Zero-Knowledge Machine Learning (DOI: 10.5281/zenodo.19008116)

Formal Verification Stack for Neural Network Structural Identity (IT-PUF Coq Proofs) (DOI: 10.5281/zenodo.18930621)

Copyright (c) 2026 Anthony Ray Coslett / Fall Risk AI, LLC. All Rights Reserved. Confidential and Proprietary. Patent Pending (Applications 63/982,893, 63/990,487, 63/996,680, 64/003,244).
zkML, Neural Network Forensics, Behavioral Fingerprinting, IT-PUF, Incrementally Verifiable Computation, Model Provenance, Model Substitution Detection, Structural Identity, Zero Knowledge Machine Learning, Delta Gene, Transformer Decoder Verification, Trusted Execution Environment Attestation
