
AI governance frameworks describe intent. They define policies, assign responsibilities, establish monitoring pipelines, and mandate audit trails. Yet none of these mechanisms answer the question that matters most in an agentic system: was the invalid action physically unreachable? This working paper defines CommitGate: an execution-boundary control architecture in which consequence-producing actions must present fresh, scoped, attributable, non-replayable authority before execution. Refused actions generate signed receipts, are written to an append-only audit trail, and must produce no downstream mutation. The claim is deliberately narrow. CommitGate does not prove AI safety, model correctness, organisational compliance, or trustworthy judgement. It defines one inspectable control object: a commit boundary where a governed AI-enabled system can still be stopped before consequence.
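The commit-boundary contract described above can be sketched as a minimal Python class. All names here (`CommitGate`, `grant`, `commit`) and the HMAC-signed receipt format are illustrative assumptions, not the paper's implementation; the point is only to show the four authority properties and the refusal path as executable checks.

```python
import hmac, hashlib, json, time, secrets

# Hypothetical key material; a real deployment would use managed keys.
AUDIT_KEY = b"demo-audit-key"

class CommitGate:
    """Sketch of a commit boundary: consequence-producing actions run only
    when presented with fresh, scoped, attributable, non-replayable authority.
    Every decision, including refusal, yields a signed receipt on an
    append-only trail; refused actions execute no effect."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.audit_log = []        # append-only: receipts are only ever appended
        self.spent_nonces = set()  # replay prevention

    def grant(self, principal, scope):
        # Authority is minted per action: attributable (principal),
        # scoped (scope), fresh (issued_at), non-replayable (nonce).
        return {"principal": principal, "scope": set(scope),
                "issued_at": time.time(), "nonce": secrets.token_hex(16)}

    def _receipt(self, decision, grant, action):
        body = {"decision": decision, "principal": grant["principal"],
                "action": action, "ts": time.time()}
        payload = json.dumps(body, sort_keys=True).encode()
        body["sig"] = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
        self.audit_log.append(body)
        return body

    def commit(self, grant, action, effect):
        # Each refusal branch returns a signed receipt and never calls
        # effect(): the invalid action is unreachable, not merely logged.
        if grant["nonce"] in self.spent_nonces:
            return self._receipt("refused:replay", grant, action)
        if time.time() - grant["issued_at"] > self.ttl:
            return self._receipt("refused:stale", grant, action)
        if action not in grant["scope"]:
            return self._receipt("refused:out-of-scope", grant, action)
        self.spent_nonces.add(grant["nonce"])
        effect()  # the consequence-producing mutation runs only here
        return self._receipt("committed", grant, action)
```

Under these assumptions, replaying a spent grant yields a `refused:replay` receipt and leaves downstream state untouched, which is the narrow, inspectable property the paper claims for the boundary.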
