
This work introduces the Holonomy Transformer (HoT), a neural architecture that embeds geometric consistency constraints directly into transformer computation. Tokens are represented as sections of a fiber bundle, and attention is computed via parallel transport with holonomy-based costs that structurally suppress inconsistent information flow. The architecture enforces reasoning consistency as a geometric property rather than a learned statistical regularity, using holonomy penalties, curvature-gated feedforward layers, and waypoint-based routing. A companion technical report describes extensions in which creativity and exploration are treated as cost-guided deviations within the learned geometric manifold. This submission presents the core architecture and theoretical framework. Empirical scaling and benchmarking are left to future work.
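To make the mechanism concrete, below is a minimal sketch of holonomy-penalized attention under stated assumptions: each token carries a learned, invertible fiber transport (parameterized here via a matrix exponential, an illustrative choice), and the round trip i → j → i under independently parameterized forward and backward transports need not close. The deviation of that loop from the identity serves as a holonomy cost subtracted from the attention logits, so inconsistent pairs are structurally down-weighted. All names (`fiber_dim`, `hol_weight`, the generator layers) are hypothetical illustrations, not the paper's implementation, which this submission does not specify at code level.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HolonomyAttention(nn.Module):
    """Single-head attention with a holonomy penalty (illustrative sketch)."""

    def __init__(self, d_model: int, fiber_dim: int = 4, hol_weight: float = 1.0):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Independent generators for the outgoing (j -> i) and returning
        # (i -> j) transports; if they were exact inverses, holonomy would
        # vanish and this would reduce to standard attention.
        self.fwd_gen = nn.Linear(d_model, fiber_dim * fiber_dim)
        self.bwd_gen = nn.Linear(d_model, fiber_dim * fiber_dim)
        self.fiber_dim = fiber_dim
        self.hol_weight = hol_weight

    def _transport(self, gen: torch.Tensor) -> torch.Tensor:
        # Matrix exponential maps each token's generator to an invertible
        # transport, so every fiber map is a valid group element.
        n, f = gen.shape[0], self.fiber_dim
        return torch.matrix_exp(gen.view(n, f, f))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, d_model); one head, no batch, for clarity.
        q, k, v = self.q(x), self.k(x), self.v(x)
        logits = q @ k.T / (x.shape[-1] ** 0.5)

        T_fwd = self._transport(self.fwd_gen(x))  # transport out of token j
        T_bwd = self._transport(self.bwd_gen(x))  # transport back into token i
        # Round-trip map for every pair (i, j): T_bwd[i] @ T_fwd[j].
        loop = torch.einsum('iab,jbc->ijac', T_bwd, T_fwd)
        eye = torch.eye(self.fiber_dim, device=x.device)
        # Holonomy cost: Frobenius deviation of the loop from the identity.
        hol_cost = ((loop - eye) ** 2).sum(dim=(-2, -1))

        # Pairs whose transports fail to close coherently receive large
        # penalties, suppressing inconsistent information flow.
        attn = F.softmax(logits - self.hol_weight * hol_cost, dim=-1)
        return attn @ v
```

The design choice worth noting in this sketch is that the penalty enters the logits before the softmax, so suppression is structural (built into the attention distribution) rather than a post hoc mask; the curvature-gated feedforward layers and waypoint routing mentioned above are not modeled here.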
