
Loss of control, behavioral drift, and non-auditability in AI-assisted software development are commonly attributed to model misalignment, hallucination, or insufficient guardrails. This paper argues that such diagnoses overlook a fundamental category distinction. We distinguish between model alignment boundaries, established at training time through data distributions, RLHF, and safety fine-tuning, and task execution boundaries, which must be explicitly constructed at execution time for a specific engineering task. While the former provides general, statistical safety tendencies, it does not, and cannot, automatically supply the concrete, task-specific constraints required for engineering governance. We show that many widely reported failures, including the generation of insecure yet functional code, arise not from deficient model alignment but from the absence of a decidable task execution boundary at runtime. When such a boundary is missing, drift and violation become epistemically undecidable, and model preferences fill the resulting vacuum. We formalize task execution boundaries as the resolution of visible scope and explicit prohibitive constraints, introduce boundary evidence as the minimal auditable unit, and demonstrate through engineering scenarios that governance mechanisms operating without this primitive rest on interpretive rather than decidable foundations.
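To make the abstract's core claim concrete, the sketch below shows one possible shape for a decidable task execution boundary: a visible scope plus explicit prohibitive constraints, where every check emits a boundary-evidence record. All names here (`TaskExecutionBoundary`, `BoundaryEvidence`, the glob-based matching, the deny-by-default policy) are illustrative assumptions, not the paper's formalism.

```python
# Hypothetical sketch of a task execution boundary as a decidable, auditable check.
# Class names, fields, and matching rules are illustrative, not from the paper.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from fnmatch import fnmatch


@dataclass(frozen=True)
class BoundaryEvidence:
    """Minimal auditable unit: one action judged against one explicit rule."""
    action: str        # e.g. a file path the agent proposes to modify
    allowed: bool      # decidable verdict, not an interpretation of behavior
    reason: str        # the specific scope rule or prohibition that decided it
    checked_at: str    # timestamp for the audit trail


@dataclass
class TaskExecutionBoundary:
    """Visible scope plus explicit prohibitive constraints, fixed at execution time."""
    visible_scope: list[str] = field(default_factory=list)  # patterns the task may touch
    prohibitions: list[str] = field(default_factory=list)   # patterns it must never touch

    def check(self, path: str) -> BoundaryEvidence:
        now = datetime.now(timezone.utc).isoformat()
        # Prohibitions take precedence: an explicit "never" beats any scope match.
        for rule in self.prohibitions:
            if fnmatch(path, rule):
                return BoundaryEvidence(path, False, f"prohibited by {rule!r}", now)
        for rule in self.visible_scope:
            if fnmatch(path, rule):
                return BoundaryEvidence(path, True, f"in scope via {rule!r}", now)
        # Anything outside the visible scope is denied explicitly,
        # rather than left to model preference to fill the vacuum.
        return BoundaryEvidence(path, False, "outside visible scope", now)


if __name__ == "__main__":
    boundary = TaskExecutionBoundary(
        visible_scope=["src/payments/*.py"],
        prohibitions=["src/payments/secrets.py", "migrations/*"],
    )
    for target in ["src/payments/invoice.py", "src/payments/secrets.py", "README.md"]:
        ev = boundary.check(target)
        print(f"{ev.action}: allowed={ev.allowed} ({ev.reason})")
```

Under these assumptions, the deny-by-default ordering (prohibitions first, then visible scope, then explicit denial) is what makes violation a decidable check: every verdict traces to a specific rule in the evidence record rather than to an interpretation of model behavior.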
Keywords: Decision Provenance, Prompt Engineering, AI Governance, Technical Accountability, Task-Specific Constraints, Category Error, AI-Assisted Software Development, LLM Agents, Insecure Code Generation, Behavioral Drift, AI Accountability, Prohibitive Constraints, Code Generation, Boundary Evidence, Software Engineering Governance, Auditability, Execution-Time Constraints, AI Safety, Model Alignment, Decidable Governance, Runtime Governance, Task Execution Boundary
