ZENODO — Preprint, 2026. License: CC BY. Data source: Datacite.
Empirical Validation of Cognitive-Derived Coding Constraints and Tokenization Asymmetries

Authors: Pereira, Luciano Federico


Abstract

Two prior theoretical works in this series identified an unresolved tension in AI-assisted software engineering. Pereira (2026a) argued that naming conventions commonly adopted for human readability impose a hidden economic cost on LLM workflows through byte-pair encoding (BPE) tokenization, but offered only analytical projections. Pereira (2026b) proposed that structural thresholds derived from cognitive science, the Cognitive-Derived Coding Constraints (CDCC), should also define an efficiency frontier for LLM processing, a convergence hypothesis left without empirical support. This paper closes both gaps. We frame LLM output generation as an economic production function: given code artifacts as inputs, the LLM produces output tokens subject to a capacity constraint. We conduct three controlled experiments using a reproducible Python pipeline. Experiment 1 measures token count differentials across naming conventions for a corpus of 200 enterprise event identifiers. Experiment 2 fits a log-log production function to 500 LLM responses across 100 Python functions stratified by cyclomatic complexity. Experiment 3 assesses whether efficiency rankings are robust across tokenizer vocabularies. Dot notation produces 1.12–1.20× more tokens than camelCase (p < 0.001), generating a projected cost differential of $54,499/year at enterprise API volumes. The production function yields an output elasticity of β = 0.102 (p < 0.001), confirming strong diminishing marginal returns to complexity: a 1% increase in input tokens produces only a 0.10% increase in LLM output. Most critically, CDCC-compliant functions exhibit a 3.3× higher output/input ratio than violating functions (0.141 vs. 0.043, p < 0.001), establishing CDCC thresholds as an empirical Pareto efficiency frontier. Efficiency rankings are perfectly consistent across all tokenizer pairs (Spearman ρ = 1.000), confirming that camelCase's advantage is universal.
Together, the results demonstrate that structural choices governing code readability for human developers simultaneously govern LLM processing efficiency, a double dividend with direct implications for engineering practice.

∗Part of a three-paper series. Companion works: Pereira 2026a (Confirmation Bias in Post-LLM Software Architecture: Are We Optimizing for the Wrong Reader?); Pereira 2026b (CDCC: A Framework for Human–Machine Co-Design).
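The tokenization asymmetry in Experiment 1 comes from how a BPE vocabulary segments identifiers: separators like `.` typically become standalone tokens, while merged camelCase subwords do not pay that cost. The toy sketch below illustrates the mechanism with a greedy longest-match tokenizer over an invented mini-vocabulary; it is not the paper's pipeline (which used real tokenizer vocabularies), and the identifier, vocabulary, and resulting ratio are illustrative only.

```python
# Toy illustration of the dot-notation vs. camelCase token-count asymmetry.
# The vocabulary and identifiers are invented for this sketch; real BPE
# vocabularies are learned from data and far larger.

def greedy_tokenize(text, vocab):
    """Greedy longest-match segmentation, falling back to single characters."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest remaining piece first
            piece = text[i:j]
            if piece in vocab or j == i + 1:  # single char always tokenizes
                tokens.append(piece)
                i = j
                break
    return tokens

# Hypothetical vocabulary: common subwords are merged; '.' stands alone.
VOCAB = {"order", "Order", "payment", "Payment", "completed", "Completed", "."}

dot = greedy_tokenize("order.payment.completed", VOCAB)
camel = greedy_tokenize("orderPaymentCompleted", VOCAB)
print(len(dot), len(camel))  # the dots cost extra tokens: 5 vs. 3 here
```

Under this vocabulary the dot-notation identifier needs two extra tokens, one per separator; the paper's measured 1.12–1.20× differential is smaller because real vocabularies also merge some dotted substrings.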
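Experiment 2's production function can be written as ln(output_tokens) = α + β·ln(input_tokens), so the slope β of an ordinary-least-squares fit in log-log space is the output elasticity directly. The sketch below recovers β from synthetic data generated with the paper's reported elasticity of 0.102; all data points are invented, and only the fitting procedure reflects the stated method.

```python
# OLS fit of the log-log production function on SYNTHETIC data.
# The true coefficients below are seeded from the paper's reported
# elasticity (beta = 0.102); everything else is invented for illustration.
import math
import random

random.seed(0)
true_alpha, true_beta = 3.0, 0.102

# Simulate 500 (input tokens, ln output tokens) pairs with small noise.
xs, ys = [], []
for _ in range(500):
    inp = random.uniform(50, 2000)  # input tokens per function (hypothetical range)
    ln_out = true_alpha + true_beta * math.log(inp) + random.gauss(0, 0.05)
    xs.append(math.log(inp))
    ys.append(ln_out)

# Closed-form OLS slope and intercept.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
alpha = my - beta * mx
print(round(beta, 3))  # elasticity estimate; lands near 0.102 by construction
```

Because β ≈ 0.10, doubling the input tokens of a function raises expected LLM output by only about 2^0.10 ≈ 1.07×, which is the diminishing-returns effect the abstract describes.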

Keywords

LLM, empirical software engineering, Pareto efficiency, cognitive load, naming conventions, production function, BPE, bounded rationality, CDCC, tokenization, diminishing marginal returns
