ZENODO
Other literature type . 2026
License: CC BY
Data sources: Datacite

Negative Tomography: Structural Inference via Failure-First Primitives

A Universal Framework for Constraint-Based Inference in Sparse or Unbounded Spaces
Authors: 3 Pilgrim, LLC

Abstract

Why do training pipelines collapse, overfit, or fail to converge in sparse, high‑dimensional, or heavily degenerate spaces? Modern models operate in regimes where positive‑first inference—generating candidates, expanding features, and optimizing over allowed trajectories—scales catastrophically. In overparameterized systems, the search space is mostly impossible, not possible; failure modes carry more information than successes; and most training logic silently depends on structure inferred from what the system cannot do. This working paper introduces Negative Tomography, a systems‑level framework that treats structural failure modes as first‑class architectural primitives. Rather than attempting to construct solutions and prune them reactively, negative tomography begins by exhaustively characterizing forbidden configurations. These negative primitives—atomic forbiddances—carve high‑dimensional space into a dual representation whose complement reconstructs the viable core. The result is a general-purpose architecture for navigating degenerate manifolds, brittle optimization landscapes, and large unbounded spaces where positive‑generation approaches destabilize. The framework rests on four claims of direct relevance to training‑logic engineering: (1) negative constraint sets are smaller and more information‑dense than positive sets; (2) failure collapses search faster than success expands it; (3) exhaustive negative satisfaction converges toward a minimal symmetric fixed point; (4) any successful learning or inference process in unbounded spaces implicitly exploits negative primitives, even if implemented under different metaphors (regularizers, penalties, clipping rules, constraint propagation, or stability heuristics).
Five architectural invariants define any valid realization: failure‑first ordering, non-pegging (structures remain revisable until negative closure), anchor acquisition (the process begins only once an informative boundary is detected), recursive inversion (meta‑failures act as higher-order signals), and convergence through selective constraint relaxation rather than maximal enforcement. These invariants recur across large-scale training systems: navigating overparameterized tensor spaces, managing degeneracy and flat regions, detecting symmetric cores in optimization, stabilizing updates under sparsity, reasoning through zero-probability transitions, and extracting invariant structure from erosive, failure-heavy regimes. Negative tomography provides a unifying vocabulary for these architecture‑level patterns. This working paper establishes the logical basis, necessity, and conceptual geometry of the method. It does not provide domain‑specific algorithms. A companion paper applies the invariants to overparameterized training (“Friction‑Guided Optimization: Negative Tomography of Overparameterized Tensor Spaces,” DOI: 10.5281/zenodo.18510602).
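The abstract's central move—characterize forbidden configurations first, then recover the viable core as the complement—can be made concrete with a toy sketch. This is a hypothetical illustration, not code from the paper (which deliberately provides no domain-specific algorithms); the configuration space, the two example forbiddances, and all names are assumptions chosen for brevity.

```python
# Toy illustration of failure-first inference in a tiny discrete space.
# Negative primitives are atomic forbiddances (predicates naming a
# structural failure mode); the viable core is the complement of the
# union of everything they forbid.

from itertools import product

# Configuration space: all 3-bit vectors (8 configurations).
space = set(product([0, 1], repeat=3))

# Hypothetical negative primitives for this toy space.
negative_primitives = [
    lambda c: c[0] == c[1] == c[2],  # fully degenerate (all components equal)
    lambda c: sum(c) == 0,           # the zero configuration
]

# Failure-first ordering: apply every forbiddance before any positive
# construction; whatever survives is the viable core.
forbidden = {c for c in space if any(p(c) for p in negative_primitives)}
viable = space - forbidden

print(sorted(viable))
# -> [(0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0)]
```

Note how the two forbiddances (each a one-line predicate) specify the 6-element viable set more compactly than enumerating it positively would—the toy analogue of claim (1), that negative constraint sets are smaller and more information-dense than positive ones.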

Keywords

symmetry fixed point, degenerate manifolds, sparsity‑tolerant architecture, structural failure signals, negative constraints, boundary‑driven navigation, overparameterized spaces, constraint‑driven search, collapse‑mode detection, failure‑first inference, training‑logic primitives
