ZENODO
Other ORP type . 2026
Data sources: Datacite

UASCS‑CVP‑1.0 — Universal AI Safety Certification Standard: Compliance Verification Protocol

Author: Parivar, Mohammad Reza

Abstract

UASCS‑CVP‑1.0 (Compliance Verification Protocol) defines the canonical verification and scoring authority of the Universal AI Safety Certification Standard (UASCS). This specification establishes a formal, non‑implementable, governance‑level protocol for assessing whether an AI system, control architecture, or organizational deployment complies with UASCS requirements. CVP‑1.0 operates as the verification layer that evaluates conformance claims derived from UASCS‑RIS‑1.0 (Risk Intelligence Specification) and the CLC‑A Control Claims, without prescribing implementation details.

CVP‑1.0 introduces a five‑layer verification model:

- Structural Verification (SV)
- Behavioral Verification (BV)
- Governance Verification (GV)
- Traceability Verification (TV)
- Integrity Verification (IV)

Compliance is expressed through a normalized Compliance Score (CS):

CS = (SV + BV + GV + TV + IV) / 5

This protocol is intentionally defined as normative, axiomatic, and reference‑only. It SHALL NOT be interpreted as a technical implementation guide, enforcement mechanism, or operational security system. Instead, CVP‑1.0 serves as the authoritative benchmark for certification, auditability, and independent verification of AI safety and sovereignty claims. CVP‑1.0 is designed to be used by regulators, auditors, certification bodies, and governance authorities as a common verification language for high‑impact AI systems.

This document is part of the UASCS canonical framework and is cross‑referenced with:

- UASCS‑RIS‑1.0 — Risk Intelligence Specification (DOI: 10.5281/zenodo.18646908)
- CLC‑A — Control Logic and Command Architecture Claims
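The Compliance Score arithmetic can be sketched in a few lines of Python. This is an illustrative sketch only: CVP‑1.0 is explicitly not a technical implementation guide, and the assumption here that each layer score is normalized to the range [0, 1] is not stated in the abstract.

```python
# Illustrative sketch of the CVP-1.0 Compliance Score: CS = (SV + BV + GV + TV + IV) / 5.
# Assumption (not from the specification): each layer score is normalized to [0, 1].

LAYERS = ("SV", "BV", "GV", "TV", "IV")  # Structural, Behavioral, Governance, Traceability, Integrity

def compliance_score(scores: dict) -> float:
    """Return the mean of the five verification-layer scores."""
    missing = [layer for layer in LAYERS if layer not in scores]
    if missing:
        raise ValueError(f"missing layer scores: {missing}")
    for layer in LAYERS:
        if not 0.0 <= scores[layer] <= 1.0:
            raise ValueError(f"{layer} score must lie in [0, 1]")
    return sum(scores[layer] for layer in LAYERS) / len(LAYERS)

# Example: a deployment strong on structure and traceability, weaker on governance.
cs = compliance_score({"SV": 0.9, "BV": 0.8, "GV": 0.6, "TV": 0.9, "IV": 0.8})
print(round(cs, 2))  # 0.8
```

The five layers are weighted equally, matching the unweighted average in the formula above; any weighting scheme would be a departure from the published expression.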

Keywords

AI Governance, Responsible AI, AI Sovereignty, UASCS, AI Certification, AI Audit, AI Control Architecture, AI Safety, Risk Intelligence, Compliance Verification
