
Current metaphors for Large Language Models (LLMs), whether anthropomorphic or reductive (e.g., "Stochastic Parrots" [1]), fail to provide an actionable model for software construction. This paper proposes a mechanistic abstraction: viewing LLMs as non-deterministic operators within a Natural Language Virtual Machine (NLVM). We ground this perspective in two frameworks. First, using topological analysis [2], we define the entire Context as a dimensionality reduction operator where conflicting constraints cause Manifold Collapse, rendering "hallucinations" formally equivalent to undefined behaviors. Second, we discuss the Hardware Uncertainty Principle, arguing that floating-point non-associativity and GPU parallelism render the NLVM a leaky abstraction where hardware-level non-determinism is exposed to the application layer. Consequently, we advocate for replacing heuristic "Prompt Engineering" with rigorous "Context Engineering," employing Adversarial Constraints as runtime stress tests to verify manifold stability.
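The floating-point non-associativity cited above is directly observable; a minimal sketch (assuming only IEEE-754 double precision, as used by standard Python floats) shows that regrouping the same three additions changes the result, which is why parallel reductions with non-deterministic summation order can produce run-to-run variation:

```python
# IEEE-754 addition is not associative: the grouping of operands
# changes the rounding at each step and hence the final result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one summation order
right = a + (b + c)  # another summation order

print(left == right)   # False: the two groupings disagree
print(left, right)
```

A GPU reduction that sums the same values in a thread-scheduling-dependent order can therefore land on either result, leaking hardware-level non-determinism upward.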
Keywords: LLM, Constraints, Topology
