
This paper introduces the concept of Monotropic Artificial Intelligence — language models that deliberately sacrifice generality to achieve extraordinary precision within narrowly circumscribed domains. Drawing on the cognitive theory of monotropism developed to understand autistic cognition, we argue that intense specialization represents not a limitation but an alternative cognitive architecture with distinct advantages for safety-critical applications. We formalize the defining characteristics of monotropic models through four essential properties: intentional domain restriction, depth over breadth, grounded knowledge, and bounded competence. We contrast monotropic models with conventional polytropic (generalist) architectures, and demonstrate their viability through Mini-Enedina, a 37.5-million-parameter model trained from scratch on physics-validated synthetic data that achieves near-perfect performance on Timoshenko beam analysis (perplexity 1.08; 100% structural validity) while remaining deliberately incompetent outside its domain. Our framework challenges the implicit assumption that artificial general intelligence constitutes the sole legitimate aspiration of AI research, proposing instead a cognitive ecology in which specialized and generalist systems coexist complementarily.
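The reported perplexity of 1.08 can be interpreted through the standard definition of perplexity as the exponential of the mean per-token negative log-likelihood. A minimal sketch (the `perplexity` helper and the sample values are illustrative, not from the paper):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood),
    using the natural logarithm."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A perplexity of 1.08 corresponds to a mean per-token NLL of
# ln(1.08) ≈ 0.077, i.e. the model assigns on average probability
# exp(-0.077) ≈ 0.93 to the token it actually emits -- near-
# deterministic prediction within the restricted domain.
print(perplexity([0.077, 0.077, 0.077]))
```

A perplexity this close to the minimum of 1.0 is consistent with the paper's claim that narrow, physics-validated training data yields near-perfect in-domain prediction.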
FOS: Computer and information sciences, Artificial Intelligence (cs.AI), Artificial Intelligence
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator. | 0 |
| Popularity | The "current" impact/attention (the "hype") of the article in the research community, based on the underlying citation network. | Average |
| Influence | The overall/total impact of the article in the research community, based on the underlying citation network (diachronically). | Average |
| Impulse | The initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
