Modular Isomorphism in Artificial Intelligence: From the Ring Z/6Z to Shared-Nothing Architecture NPUs

Authors: Peinador Sala, Jose Ignacio


Abstract

The scalability of Deep Learning models currently faces physical limits within the monolithic Von Neumann architecture, where the energy cost of data movement exceeds that of computation. This work proposes a solution based on Modular Isomorphism under the ring ℤ/6ℤ, which decomposes dense neural networks into a hexagonal ensemble of six independent sub-networks. We validate this approach experimentally on MNIST (97.03% accuracy) and on Transformers (94.75% validation accuracy), demonstrating that the Shared-Nothing architecture maintains competitive performance while eliminating the need for low-latency interconnects. A Monte Carlo robustness analysis (N=10) confirms the statistical significance (p < 0.012) of the reduction in the generalization gap. An economic analysis reveals 18× cost reductions via node arbitrage, using 28nm technology instead of 3nm. These results lay the foundation for a new generation of modular NPUs based on low-cost chiplets, democratizing access to high-performance computing.

Key Contributions

- Mathematical Formalization: Definition of the Stride-6 tensor operator, establishing an isomorphism between modular convolution and matrix multiplication.
- Hex-Ensemble Architecture: Design of a distributed neural network in which six "blind" workers recover accuracy through vote aggregation, without cache coherence.
- Inverse Generalization Gap: Empirical finding that "partial blindness" acts as a structural regularizer, outperforming dense models in validation scenarios (Modular Transformer: +24.37% gap improvement vs. the standard model).
- Economic Feasibility: Demonstration that 28nm chiplets can compete with monolithic 3nm nodes by leveraging high yields and eliminating the "Reticle Limit."

Included Artifacts

- 📄 Article (PDF): Full manuscript detailing the theoretical framework, mathematical proofs, and economic models.
- 💻 Source Code (Jupyter/Python): Complete reproduction environment, including Tensor Isomorphism Validation (error < 1e-5), Hex-Ensemble training, and Monte Carlo statistical robustness analysis.
- 📝 LaTeX Source: Complete source files for the manuscript.

Context & Preceding Work

This architecture is the third evolution of the Modular Spectrum Theory. It builds on previous algorithmic validation in which the Shared-Nothing paradigm was used to compute 100 million digits of π with 95% parallel efficiency (see: 10.5281/zenodo.18455954).

Licensing

The source code associated with this research is distributed under the PolyForm Noncommercial License 1.0.0 to foster open science while protecting independent innovation.

GitHub Repository: https://github.com/NachoPeinador/Isomorfismo-Modular-Z-6Z-en-Inteligencia-Artificial
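The core idea of a residue-class (mod-6) decomposition with shared-nothing aggregation can be illustrated with a minimal NumPy sketch. This is our own illustration of the general technique, not the repository's actual Stride-6 operator: each of six "workers" sees only the input positions whose index is congruent to its residue class mod 6, computes its partial product independently, and the aggregated result is checked against the dense multiplication (mirroring the record's Tensor Isomorphism Validation, error < 1e-5).

```python
import numpy as np

# Illustrative sketch (hypothetical names): partition the rows of a dense
# weight matrix by index mod 6, let each worker multiply its slice with the
# matching input entries in isolation, then aggregate by summation.
rng = np.random.default_rng(0)
x = rng.standard_normal(12)        # input vector, length divisible by 6
W = rng.standard_normal((12, 8))   # dense weight matrix

# Dense reference computation.
y_dense = x @ W

# Six shared-nothing workers: worker k only touches indices i with i % 6 == k.
partials = []
for k in range(6):
    mask = np.arange(12) % 6 == k
    partials.append(x[mask] @ W[mask, :])  # no data shared between workers

# Aggregation recovers the dense result because the six residue classes
# partition the index set exactly.
y_modular = np.sum(partials, axis=0)

assert np.max(np.abs(y_dense - y_modular)) < 1e-5
```

Because the residue classes partition the indices, the decomposition is exact up to floating-point rounding; the "blindness" of each worker costs nothing for a linear map, which is what makes the ensemble formulation of the nonlinear case interesting.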

Keywords

Artificial intelligence, Modular Isomorphism, Z/6Z Ring, Green AI, Deep learning, Computer hardware, NPU, Energy efficiency, Machine learning, Inverse Generalization Gap, Sustainable economy, Neural Networks, Computer, Shared-Nothing Architecture, Chiplets, Sustainable Computing, Modular Neural Networks
