Data sources: ZENODO
From Policy to Commit: Execution-Boundary Control for Governed AI Systems

Authors: Jones, Ricky Dean

Abstract

AI governance frameworks describe intent. They define policies, assign responsibilities, establish monitoring pipelines, and mandate audit trails. Yet none of these mechanisms answer the question that matters most in an agentic system: was the invalid action physically unreachable? This working paper defines CommitGate: an execution-boundary control architecture in which consequence-producing actions must present fresh, scoped, attributable, non-replayable authority before execution. Refused actions generate signed receipts, are written to an append-only audit trail, and must produce no downstream mutation. The claim is deliberately narrow. CommitGate does not prove AI safety, model correctness, organisational compliance, or trustworthy judgement. It defines one inspectable control object: a commit boundary where a governed AI-enabled system can still be stopped before consequence.
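The abstract names four properties a commit-boundary authority must present (fresh, scoped, attributable, non-replayable) and two obligations on refusal (a signed receipt and an append-only audit entry with no downstream mutation). As an illustration only, the checks might be sketched as follows; the class name `CommitGate` comes from the paper, but every field name, the HMAC signing scheme, and the in-memory log are assumptions of this sketch, not the paper's specification:

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # hypothetical key; a real deployment would use managed key material


class CommitGate:
    """Sketch of an execution-boundary check for consequence-producing actions."""

    def __init__(self):
        self.seen_nonces = set()  # non-replayable: each authority token is single-use
        self.audit_log = []       # append-only audit trail (in-memory stand-in)

    def _sign(self, record):
        payload = json.dumps(record, sort_keys=True).encode()
        return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

    def commit(self, action, token):
        """Return None if the action may execute, else a signed refusal receipt."""
        now = time.time()
        reason = None
        if token.get("expires", 0) < now:
            reason = "stale"              # fresh: authority must not be expired
        elif action["kind"] not in token.get("scope", []):
            reason = "out-of-scope"       # scoped: authority names permitted action kinds
        elif not token.get("principal"):
            reason = "unattributed"       # attributable: a principal must be named
        elif token.get("nonce") in self.seen_nonces:
            reason = "replayed"           # non-replayable: the nonce was already consumed
        if reason is not None:
            receipt = {"action": action, "refused": reason, "at": now}
            receipt["sig"] = self._sign(receipt)  # refusal yields a signed receipt...
            self.audit_log.append(receipt)        # ...written to the audit trail
            return receipt                        # and no downstream mutation occurs
        self.seen_nonces.add(token["nonce"])
        self.audit_log.append({"action": action, "committed": True, "at": now})
        return None  # caller may now execute the consequence-producing action
```

In this sketch the gate never performs the action itself; it only decides reachability, which is the narrow control object the paper claims. Replaying a consumed token is refused with a signed receipt rather than silently ignored.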
