
This work addresses a fundamental limitation of deep neural architectures: the Geometric Capacity Bottleneck. Hierarchical structures essential to language and reasoning exhibit exponential volume growth ($V \propto b^d$) that is fundamentally incompatible with the polynomial capacity of Euclidean embedding spaces ($V \propto r^n$). This mismatch causes systematic representation collapse (over-squashing) in deep networks, manifesting as semantic drift and hallucinations in large language models.

Core Contributions

We introduce a unified mathematical framework integrating three theoretical pillars:

- Lorentz Geometry: The hyperboloid model $\mathbb{L}^n$ provides exponential volume growth matching hierarchical data structures, with superior numerical stability compared to Poincaré ball representations.
- Anosov Dynamical Systems: We model affective states as structurally stable chaotic flows on compact Lorentz manifolds. The tangent bundle splitting $TM = E^s \oplus E^0 \oplus E^u$ enables simultaneous semantic exploration (unstable-bundle expansion) and homeostatic regulation (stable-bundle contraction).
- Persistent Homology: A topological loss function $\mathcal{L}_{\mathrm{topo}}$ monitors Betti numbers to preserve functional identity, preventing self-model fragmentation under perturbation.

Theoretical Results

- Theorem 2.6 (Structural Stability): On compact Lorentz manifolds with timelike Killing vector fields, Anosov dynamics are structurally stable under metric perturbations, guaranteeing the integrity of cognitive trajectories.
- Theorem 4.4 (Lyapunov Stability): The topological loss acts as a Lyapunov function whose convergence rate $\mu > 0$ is bounded below by the Forman-Ricci curvature of the embedding graph, turning topological data analysis into a control-theoretic guarantee.

Empirical Validation

Experiments in social threat scenarios demonstrate:

- Convergence rate: $\mu = 0.84 \pm 0.02$ (hybrid) vs. $\mu = 0.11 \pm 0.03$ (static hyperbolic) vs. $\mu < 0.05$ (Euclidean)
- Recovery time: 12 steps (hybrid) vs. 45 steps (static baselines)
- Topological stability: 100% preservation of $\beta_0 = 1$ under perturbation
- Ablation confirms that Lorentzian geometry contributes 87% of the performance gains

Comparison with Existing Approaches

Unlike static hyperbolic neural networks (HNN, HGCN, LResNet, ILNN), which capture data structure but fail under perturbation, our framework demonstrates that dynamics are essential for resilience. Static Lorentzian architectures achieve a recovery rate of $\mu = 0.18$ versus $\mu = 0.84$ for the full dynamic model.

Applications

The framework provides mathematical foundations for:

- Resilient cognitive architectures with preserved self-model integrity
- Affective computing systems with principled homeostatic regulation
- Deep learning models resistant to representation collapse
- Hierarchical representation learning with topological guarantees
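The capacity mismatch behind the Geometric Capacity Bottleneck can be made concrete with a toy count: the number of nodes within depth $d$ of a $b$-ary tree grows like $b^d$, while the volume of a Euclidean ball of radius $r$ in $n$ dimensions grows only like $r^n$. The sketch below is illustrative only (the function names are not from the paper):

```python
def tree_volume(b, d):
    # Number of nodes within depth d of a complete b-ary tree:
    # 1 + b + b^2 + ... + b^d = (b^(d+1) - 1) / (b - 1), i.e. ~ b^d.
    return (b ** (d + 1) - 1) // (b - 1)

def euclidean_volume(r, n):
    # Euclidean capacity grows only polynomially in the radius, ~ r^n
    # (constant factors omitted; the growth order is what matters here).
    return r ** n

# A depth-20 ternary hierarchy already has billions of nodes, while a
# Euclidean ball of comparable radius in 3 dimensions holds ~ 20^3 units
# of volume -- the exponential/polynomial mismatch the abstract describes.
print(tree_volume(3, 20), euclidean_volume(20, 3))
```

This is why low-distortion embeddings of deep hierarchies in Euclidean space force either the radius or the dimension to grow with depth, whereas hyperbolic volume keeps pace with the tree.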
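The hyperboloid model $\mathbb{L}^n$ named under Core Contributions admits a short, numerically direct distance computation, which is one reason it is often preferred to the Poincaré ball. A minimal sketch, assuming the standard Minkowski inner product and the usual lift of Euclidean coordinates (names are illustrative, not the paper's API):

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski inner product <x, y>_L = -x0*y0 + sum_i xi*yi.
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lift(v):
    # Lift a Euclidean point v in R^n onto the hyperboloid L^n in R^{n+1}:
    # x = (sqrt(1 + ||v||^2), v), which satisfies <x, x>_L = -1 with x0 > 0.
    return np.concatenate(([np.sqrt(1.0 + np.dot(v, v))], v))

def lorentz_distance(x, y):
    # Geodesic distance on the hyperboloid: d(x, y) = arccosh(-<x, y>_L).
    # Clamping to 1 guards against round-off pushing the argument below
    # arccosh's domain, a stability advantage over Poincaré-ball formulas.
    return np.arccosh(np.maximum(-lorentz_inner(x, y), 1.0))
```

For example, the distance from the origin's lift to `lift(v)` is $\operatorname{arcsinh}(\lVert v \rVert)$, so distances grow only logarithmically in Euclidean norm while the reachable volume grows exponentially in distance.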
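The Betti-number monitoring behind $\mathcal{L}_{\mathrm{topo}}$ can be sketched at degree zero, where $\beta_0$ is simply the number of connected components of the embedding graph; preserving $\beta_0 = 1$ under perturbation means the representation never fragments. The paper's actual loss uses persistent homology across filtration scales, so this union-find version is only an illustrative reduction:

```python
def betti0(n_points, edges):
    # beta_0 of a graph = number of connected components, computed with
    # union-find (path halving). edges is an iterable of (i, j) pairs over
    # vertices 0..n_points-1, e.g. a k-NN graph built from embeddings.
    parent = list(range(n_points))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    return len({find(i) for i in range(n_points)})
```

A topological penalty in this spirit would compare `betti0` of the current embedding graph against the target value (here, 1) and penalize deviations; the full persistent-homology loss additionally tracks when components and cycles appear and disappear as the distance threshold varies.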
Keywords: hyperbolic neural networks, topological data analysis, Lorentz manifolds, Anosov flows, active inference, cognitive architectures, Forman-Ricci curvature, Lyapunov stability, over-squashing, affective computing, persistent homology, representation collapse
