
This work proposes a unified multi-layer causal framework connecting the geometric structure of large language model (LLM) embeddings, the information-flow properties of attention mechanisms, and the emergence of semantic meaning during reasoning. The study introduces three interacting layers: physical (geometry/topology), information (directed correlations), and meaning (causal constraints). It demonstrates that standard transformer embeddings can be analyzed using tools such as persistent homology, local intrinsic-dimension estimation, and density-matrix-like representations. Experimental results on GPT-2 small reveal:

- non-trivial H₁ loops in embedding space;
- non-uniform, low-dimensional structure detected via the TwoNN estimator;
- attention matrices acting as stochastic operators with meaningful spectral structure.

While not directly tested in this work, prior observations in recent LLMs suggest that stronger alignment or reward-optimization pressure tends to reduce or flatten H₁ loop structures in their embedding spaces, indicating that topology-sensitive semantic pathways may be fragile under certain forms of post-training optimization. All code necessary for reproducibility is included in the appendix. The aim of this work is to encourage further exploration of the deep structure of learned representations and their potential connections to physics, information theory, and semantic emergence.
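To make the intrinsic-dimension claim concrete, the sketch below implements the TwoNN estimator (Facco et al., 2017) referenced above. It is a minimal, self-contained illustration on synthetic data, not a reproduction of the paper's GPT-2 analysis; the synthetic manifold and all parameter choices here are assumptions for demonstration only.

```python
import numpy as np

def twonn_intrinsic_dimension(X: np.ndarray) -> float:
    """Estimate intrinsic dimension via TwoNN (Facco et al., 2017).

    Uses only each point's first and second nearest-neighbor distances:
    the ratio mu = r2 / r1 follows a Pareto law whose shape parameter
    equals the intrinsic dimension, giving the MLE d = N / sum(log mu).
    """
    # Squared pairwise distances via the Gram-matrix identity
    gram = X @ X.T
    sq_norms = np.diag(gram)
    d2 = np.maximum(sq_norms[:, None] + sq_norms[None, :] - 2.0 * gram, 0.0)
    np.fill_diagonal(d2, np.inf)  # exclude self-distances
    d2.sort(axis=1)
    r1 = np.sqrt(d2[:, 0])        # distance to 1st nearest neighbor
    r2 = np.sqrt(d2[:, 1])        # distance to 2nd nearest neighbor
    mu = r2 / r1
    return len(mu) / np.sum(np.log(mu))

rng = np.random.default_rng(0)
# Synthetic stand-in for token embeddings: 500 points on a 5-dimensional
# linear manifold embedded in a 20-dimensional ambient space.
latent = rng.uniform(size=(500, 5))
X = latent @ rng.normal(size=(5, 20))
d_hat = twonn_intrinsic_dimension(X)
print(f"estimated intrinsic dimension: {d_hat:.2f}")  # near 5, well below 20
```

Applied to real embeddings, the same estimator can be run on local neighborhoods to expose the non-uniform dimensional structure the paper reports.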
For readers interested in the broader theoretical context of this work, a companion whitepaper provides an integrated view of how quantum phase structure, geometric manifolds, and topological invariants may relate to semantic stability in AI systems:

Meaning Unification Framework: A Dual-Tier Whitepaper Connecting Quantum–Geometric Structures and AI Semantic Stability
URL: https://zenodo.org/records/17786069

This whitepaper positions the present paper as part of a larger two-tier research program, linking foundational mathematical structures (Tier I) with applications to alignment, semantic stability, and internal phenomenology (Tier II). Furthermore, the findings of this paper are expanded in the newly published Tier-I Paper 2, which investigates the persistent topological structures of LLM embedding spaces in greater depth:

Persistent Topological Structures in LLM Embedding Spaces: From Geometric Analysis to Controllability
URL: https://zenodo.org/records/17785728

Paper 2 builds directly upon the geometric and topological observations reported here, demonstrating how H₁ loop structures correspond to topologically stable subspaces and may serve as mathematically grounded axes for controllability and alignment preservation. Together, Papers 1 and 2 form the foundation of the Meaning Unification Framework's Tier-I theoretical program.
Keywords: LLM, topology, quantum-inspired models, causal emergence
