ZENODO
Preprint
Data sources: ZENODO

The Entropy Limit of Generative Intelligence

Author: Troche, Elias


Abstract

Large language models exhibit a range of degradation phenomena — hallucination, catastrophic forgetting, lost in the middle, alignment drift — that the industry treats as separate problems requiring separate solutions. We propose that these phenomena are unified manifestations of a single underlying process: informational error accumulation in high-complexity, low-decoupling systems operating without active correction mechanisms. We introduce the Informational Persistence Model (IPM), a constraint framework derived from the thermodynamics of information processing, and operationalize it for AI systems through four measurable parameters: metabolic intensity (η), informational complexity (K), error-correction fidelity (c), and structural decoupling (S). The IPM predicts that model degradation is constrained by a deterministic boundary determined by the persistence coefficient Φ = (c·S)/(η·K). We validate the framework retrospectively against published degradation data from five established studies: TruthfulQA (hallucination scaling), Liu et al. 2023 (lost in the middle), Kirkpatrick et al. 2017 (catastrophic forgetting), Anthropic sleeper agents (alignment drift), and Mixtral MoE comparisons. We calculate Φ for two representative architectures — Llama-3-70B (dense transformer) and DeepSeek-V3 (Mixture of Experts) — demonstrating that the MoE architecture achieves significantly higher persistence due to its structural decoupling (S = 0.65 vs 0.15). We propose five falsifiable predictions and five immediate applications — from production monitoring to architectural licensing — that any organization can implement with existing tools. We conclude that current scaling practices — increasing K and η without proportional increases in c and S — are thermodynamically unsustainable. The useful lifespan of generative models is not a matter of engineering failure but of physical constraint.
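The persistence coefficient defined in the abstract, Φ = (c·S)/(η·K), can be sketched in a few lines of code. Only the structural decoupling values (S = 0.65 for the MoE architecture vs S = 0.15 for the dense transformer) come from the abstract; the values assigned to c, η, and K below are placeholder assumptions chosen purely to illustrate the calculation, not figures from the paper.

```python
def persistence_coefficient(c: float, S: float, eta: float, K: float) -> float:
    """Phi = (c*S)/(eta*K): error-correction fidelity c and structural
    decoupling S in the numerator, metabolic intensity eta and
    informational complexity K in the denominator."""
    return (c * S) / (eta * K)

# S values from the abstract; c, eta, K are hypothetical and held equal
# across both architectures so the comparison isolates the effect of S.
phi_dense = persistence_coefficient(c=0.9, S=0.15, eta=1.0, K=1.0)  # Llama-3-70B
phi_moe = persistence_coefficient(c=0.9, S=0.65, eta=1.0, K=1.0)    # DeepSeek-V3

# With the other parameters matched, the persistence ratio reduces to
# the ratio of decoupling values, 0.65 / 0.15 (roughly 4.3x).
assert abs(phi_moe / phi_dense - 0.65 / 0.15) < 1e-9
```

Under these matched placeholder parameters the MoE architecture's higher Φ follows directly from its larger S, which is the comparison the abstract draws.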
