ZENODO
Preprint, 2025
License: CC BY
Data sources: ZENODO

The Geometric Limits of Vector-Space Models: Why Contemporary AI Cannot Access Human Phase-Topological Cognition

Authors: Chen, Jen-Hsuan

Abstract

This paper examines a geometric assumption implicit in most contemporary AI systems: that cognition can be represented within fixed-dimensional vector spaces. We argue that this assumption has not been fully scrutinized, and that its limitations become evident when contrasted with the phase-topological structure of human cognition. Language is not a purely one-dimensional sequence but a hybrid structure composed of a linear form and multi-dimensional semantic dependencies. It functions as a fractional-dimensional embedding that compresses high-dimensional cognitive structure into a transmissible sequence. This constitutes the first topological folding, from mind to language; here, “fractional-dimensional” refers to effective representational degrees of freedom rather than a formal fractal metric. Modern AI imposes a second folding by embedding tokenized language into a fixed-dimensional vector space, where all computation is constrained to interpolation within a pre-specified geometric manifold. The resulting double-folded representation can be expressed as: Mind → Language (high-dimensional compression into a fractional structure), and Language → AI Vector Space (forced embedding into a fixed geometry). We argue that contemporary AI systems exhibit systematic difficulties in forming genuine abstractions, insight-like abrupt reconfigurations, or deep world models. These limitations appear not to stem solely from data or compute constraints, but from a geometric mismatch between fixed-dimensional vector spaces and the phase-topological structures posited for human cognition. By comparing the operational properties of vector spaces and phase-topological manifolds, we show that they belong to different topological families and therefore do not admit a homeomorphic or invertible mapping.
This work presents a theoretical and conceptual geometric framework rather than an empirical or algorithmic evaluation, situating the contribution within foundational AI theory rather than experimental modeling. These observations suggest that progress beyond interpolation-based models may depend on exploring representational spaces whose geometric properties more closely align with those hypothesized for human cognition. This work does not attempt to define such spaces, but highlights geometric considerations that may be relevant for future architectural design.
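
The abstract's second folding, and the claim that computation then reduces to interpolation, can be illustrated with a minimal toy sketch. Everything here (the sentence, the embedding dimension, the random embedding table, the mixing weights) is hypothetical and chosen for illustration only; this is not the paper's method, merely a demonstration that a convex mix of fixed-dimensional token embeddings never leaves the region those embeddings span.

```python
import random

random.seed(0)

# Folding 1 (Mind -> Language): a thought already arrives as a
# linear token sequence before any model sees it.
tokens = "minds fold meaning into sequences".split()

# Folding 2 (Language -> AI vector space): each token is forced into a
# fixed-dimensional embedding; d = 8 is a hypothetical toy dimension.
d = 8
table = {t: [random.gauss(0, 1) for _ in range(d)] for t in set(tokens)}
embedded = [table[t] for t in tokens]

# Attention-style mixing is a convex combination of these rows: the
# result stays inside the convex hull of the input embeddings, i.e.
# computation is interpolation within a pre-specified geometry.
w = [random.random() for _ in tokens]
total = sum(w)
w = [x / total for x in w]
mixed = [sum(w[i] * embedded[i][j] for i in range(len(tokens)))
         for j in range(d)]

# Each coordinate of the mix is bounded by the inputs' coordinates.
for j in range(d):
    col = [v[j] for v in embedded]
    assert min(col) <= mixed[j] <= max(col)

print(len(mixed))  # → 8: output dimension is fixed, whatever the sentence
```

The coordinatewise bound checked at the end holds for any convex combination, which is the geometric sense in which a fixed vector space confines the model to interpolation rather than reconfiguration.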

Keywords

cognitive geometry, phase-topological cognition, vector-space models, geometric limits, AI reasoning models, cognitive topology

  • BIP! impact indicators
    selected citations: 0 (derived from selected sources; an alternative to the "Influence" indicator)
    popularity: Average (the "current" impact/attention of the article in the research community, based on the underlying citation network)
    influence: Average (the overall/total impact of the article in the research community, based on the underlying citation network, diachronically)
    impulse: Average (the initial momentum of the article directly after its publication, based on the underlying citation network)