
This is the preprint version. The work was originally submitted to arXiv on 7 May 2025 under submission ID 6414029 but was mistakenly deleted due to a system error. A revised version with only minor edits was resubmitted on 17 May 2025 under submission ID 6447685 and is currently on hold pending moderation. We are sharing this version on Zenodo to preserve the original timeline and ensure public access during the delay. Note: on 17 Jun 2025, the arXiv moderators rejected the submission.

Abstract: The design of artificial intelligence systems has historically depended on resource-intensive pipelines of architecture search, parameter optimization, and manual tuning. We propose a fundamental shift: the Generator paradigm, wherein both a model's architecture $A$ and parameters $W$ (or, more generally, executable functions) are synthesized directly from compact semantic seeds $z$ via a generator $G$, formalized as $(A, W) = G(z)$. Unlike traditional approaches that separate architecture discovery from weight learning, our framework decouples the generator $G$ from fixed procedural search and training loops, permitting $G$ to be symbolic, neural, procedural, or hybrid. This abstraction generalizes and unifies existing paradigms, including standard machine learning (ML), self-supervised learning (SSL), meta-learning, neural architecture search (NAS), hypernetworks, program synthesis, automated machine learning (AutoML), and neuro-symbolic AI, as special cases within a broader generative formulation. By reframing model construction as semantic generation rather than incremental optimization, this approach bypasses persistent challenges such as compute-intensive search, brittle task adaptation, and rigid retraining requirements. This work lays a foundation for compact, efficient, and interpretable world model generation, and opens new paths toward scalable, adaptive, and semantically conditioned intelligence systems.
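For concreteness, the following is a minimal, self-contained sketch of what $(A, W) = G(z)$ could look like for a toy MLP family. The seed layout, decoding rules, and function names (generate_model, forward) are illustrative assumptions made here, not the paper's implementation; the point is only that a compact seed deterministically yields both an architecture spec and a full weight set, with no search or training loop.

```python
# Illustrative sketch of the Generator paradigm (A, W) = G(z).
# A compact semantic seed z is decoded into an architecture spec A
# and a weight set W; the same z always yields the same model.
# All names and decoding rules are assumptions for this example.

import numpy as np

def generate_model(z: np.ndarray):
    """G(z) -> (A, W) for a toy MLP family."""
    # Architecture A: decode depth and hidden width from the first seed entries.
    depth = 1 + int(abs(z[0]) * 3) % 4          # 1..4 hidden layers
    width = 8 * (1 + int(abs(z[1]) * 7) % 8)    # 8..64 hidden units
    layer_sizes = [16] + [width] * depth + [1]  # fixed input/output dims
    A = {"type": "mlp", "layers": layer_sizes}

    # Parameters W: derived deterministically from the remaining seed entries,
    # so G is a pure function of z rather than the result of training.
    rng_seed = int(np.abs(z[2:] * 1e6).sum()) % (2**32)
    rng = np.random.default_rng(rng_seed)
    W = [rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
         for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
    return A, W

def forward(A, W, x):
    """Run the generated model: a plain tanh MLP."""
    h = x
    for layer in W[:-1]:
        h = np.tanh(h @ layer)
    return h @ W[-1]

if __name__ == "__main__":
    z = np.array([0.7, -1.2, 0.05, 0.9])   # compact semantic seed
    A, W = generate_model(z)                # (A, W) = G(z)
    y = forward(A, W, np.ones((1, 16)))
    print(A, y.shape)
```

In this toy setting, conditioning on a different seed z swaps out both the architecture and the weights in one step, which is the behaviour the abstract contrasts with incremental search-and-train pipelines.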
Keywords: Machine Learning, Artificial Intelligence
