
Second article in the From Instinct to Intent™ series. Examines why programming languages designed for human cognition are structurally mismatched for AI-native code generation. Drawing on data from over 10,000 developers across 1,255 teams (Faros/Latent.Space), the author documents that AI adoption increases code output by 98% while increasing review time by 91%, creating a widening generation-verification gap. A METR randomized controlled trial found experienced developers were 19% slower with AI coding tools despite believing they were 24% faster. The article analyzes three emerging responses: AI-native code review (Anthropic Claude Code Review, IBM Research), spec-driven development (Ankit Jain/Aviator), and new language paradigms (Martin Kleppmann's "vericoding"). It argues these are not competing approaches but progressive elevations of the same structural need: a formal interface between human intent and machine execution. The paper introduces the concept of "Instance One" of the Intent Layer, applied to AI-native programming languages, and previews "Instance Two" applied to enterprise AI governance. The central argument: the compression problem between human intent and machine execution does not go away with better models. It goes away with better structure. Nine original diagrams illustrate the architectural patterns discussed.
From Instinct to Intent, programming languages, code review, constraint-driven development, intent layer, human-AI interaction, formal verification, AI-native development, vericoding, AI governance
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall impact of an article in the research community based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the current impact/attention (the "hype") of an article in the research community, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
