
Why do training pipelines collapse, overfit, or fail to converge in sparse, high-dimensional, or heavily degenerate spaces? Modern models operate in regimes where positive-first inference (generating candidates, expanding features, and optimizing over allowed trajectories) scales catastrophically. In overparameterized systems, the search space is mostly impossible rather than possible; failure modes carry more information than successes; and most training logic silently depends on structure inferred from what the system cannot do. This working paper introduces Negative Tomography, a systems-level framework that treats structural failure modes as first-class architectural primitives. Rather than constructing solutions and pruning them reactively, negative tomography begins by exhaustively characterizing forbidden configurations. These negative primitives (atomic forbiddances) carve high-dimensional space into a dual representation whose complement reconstructs the viable core. The result is a general-purpose architecture for navigating degenerate manifolds, brittle optimization landscapes, and large unbounded spaces where positive-generation approaches destabilize. The framework rests on four claims of direct relevance to training-logic engineering: (1) negative constraint sets are smaller and more information-dense than positive sets; (2) failure collapses search faster than success expands it; (3) exhaustive negative satisfaction converges toward a minimal symmetric fixed point; (4) any successful learning or inference process in unbounded spaces implicitly exploits negative primitives, even when implemented under different metaphors (regularizers, penalties, clipping rules, constraint propagation, or stability heuristics).
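The carving step above can be made concrete on a toy space. The following sketch is a hypothetical illustration (it is not an algorithm from the paper): a handful of negative primitives, expressed as forbiddance predicates, describe a large forbidden region compactly, while the positive description of the same viable core would have to enumerate every surviving point, which is the intuition behind claim (1).

```python
# Hypothetical illustration (not from the paper): a few "negative primitives"
# (atomic forbiddances) carve a toy discrete space, and the viable core is
# recovered as the complement of their union.
from itertools import product

# Toy search space: all integer triples in [0, 8)^3.
space = list(product(range(8), repeat=3))  # 512 candidates

# Three atomic forbiddances (negative primitives), chosen for illustration.
forbidden = [
    lambda p: p[0] + p[1] > 9,        # budget violation
    lambda p: p[2] == 0,              # degenerate configuration
    lambda p: p[0] == p[1] == p[2],   # symmetric collapse
]

# The viable core is the complement of the union of forbidden regions.
viable = [p for p in space if not any(f(p) for f in forbidden)]

print(len(forbidden), "negative primitives describe a core of", len(viable), "points")
# → 3 negative primitives describe a core of 339 points
```

Three predicates fully determine the 339-point core; a positive-first description would need the 339 points themselves, and the gap widens as dimensionality grows.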
Five architectural invariants define any valid realization: failure‑first ordering, non-pegging (structures remain revisable until negative closure), anchor acquisition (the process begins only once an informative boundary is detected), recursive inversion (meta‑failures act as higher-order signals), and convergence through selective constraint relaxation rather than maximal enforcement. These invariants recur across large-scale training systems: navigating overparameterized tensor spaces, managing degeneracy and flat regions, detecting symmetric cores in optimization, stabilizing updates under sparsity, reasoning through zero-probability transitions, and extracting invariant structure from erosive, failure-heavy regimes. Negative tomography provides a unifying vocabulary for these architecture‑level patterns. This working paper establishes the logical basis, necessity, and conceptual geometry of the method. It does not provide domain‑specific algorithms. A companion paper applies the invariants to overparameterized training (“Friction‑Guided Optimization: Negative Tomography of Overparameterized Tensor Spaces,” DOI: 10.5281/zenodo.18510602).
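The five invariants can be read as a control loop. The sketch below is an assumed, minimal interface (the function name, probe representation, and relaxation rule are illustrative choices, not the paper's algorithm): probes report failures first, the process anchors only once a probe yields an informative boundary, candidates remain revisable throughout, and a probe that would eliminate everything is treated as a meta-failure and relaxed rather than enforced.

```python
# Hypothetical sketch of the five invariants as a failure-first loop.
# Names and interfaces are assumptions for illustration only.

def negative_tomography(candidates, probes):
    """Carve `candidates` down by failure-first probing.

    probes: callables returning True when a candidate FAILS (a negative signal).
    """
    viable = list(candidates)        # non-pegging: the set stays revisable
    anchored = False
    for probe in probes:             # failure-first ordering
        failures = [c for c in viable if probe(c)]
        if not failures and not anchored:
            continue                 # anchor acquisition: wait for an informative boundary
        anchored = True
        if len(failures) == len(viable):
            # recursive inversion: a probe that forbids everything is itself
            # a meta-failure; relax it instead of enforcing maximally.
            continue
        viable = [c for c in viable if not probe(c)]
    return viable                    # convergence: the surviving core

core = negative_tomography(range(10), [
    lambda x: x % 2 == 1,    # forbid odd values
    lambda x: True,          # degenerate probe: would forbid everything
    lambda x: x > 6,         # forbid large values
])
print(core)  # → [0, 2, 4, 6]
```

Note how the second probe is skipped under selective relaxation: maximal enforcement would empty the set, whereas relaxing the over-strict constraint lets the loop converge on a non-trivial core.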
Keywords: symmetry fixed point, degenerate manifolds, sparsity-tolerant architecture, structural failure signals, negative constraints, boundary-driven navigation, overparameterized spaces, constraint-driven search, collapse-mode detection, failure-first inference, training-logic primitives
