
This preprint introduces the Collapse Index (CI) CrackTest, a morphology-aligned perturbation framework for evaluating robustness and collapse inheritance in large language models (LLMs). The study demonstrates that CI CrackTest, a bounded and lightweight perturbation protocol originally developed for brittleness diagnostics, can quantify systematic error propagation across morphologically related variants in controlled classification tasks. Using a 186-variant perturbation suite spanning eight morphological families (lexical, syntactic, ambiguity, semantic, compression, noise, boundary, contrastive), the evaluation analyzes collapse inheritance, family-specific brittleness, and confidence behavior under perturbation. Across three frontier models (GPT-4o, Claude Haiku 4.5, Gemini 2.5 Flash), the framework identifies consistent robustness signatures, including 11–16% collapse inheritance rates, 9–28% semantic brittleness, and 0.10–0.17 confidence masking deltas, independent of architecture or training regime. The framework is presented as a behavioral diagnostic tool for robustness analysis. CI CrackTest does not expose internal perturbation heuristics, variant-generation mechanisms, or scoring systems. All reported findings are based solely on externally observable model outputs under controlled morphological perturbation. All internal algorithms, classification mechanisms, and inference procedures remain proprietary to Collapse Index Labs. Project page: https://collapseindex.org. Licensed under CC BY-NC-ND 4.0.
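Since the abstract states that all internal algorithms remain proprietary and that findings rest only on externally observable outputs, the headline metrics can at least be illustrated from the outside. The sketch below shows one plausible way to compute a collapse inheritance rate (how often a failure on a base item propagates to its morphological variants) and a confidence masking delta (stated confidence exceeding observed accuracy under perturbation). The function names, input shapes, and aggregation rules are assumptions for illustration, not the authors' actual scoring system.

```python
# Hedged sketch only: CI CrackTest's scoring is proprietary, so these are
# plausible reconstructions of the reported behavioral metrics from
# externally observable outputs, not the framework's real implementation.

def collapse_inheritance_rate(results):
    """results: list of (base_correct, variant_correct_list) pairs, one per
    base item. A variant 'inherits' a collapse when the base item fails and
    the morphological variant fails as well (assumed definition)."""
    inherited, total = 0, 0
    for base_correct, variants in results:
        if base_correct:
            continue  # only base-item failures can be inherited
        for variant_correct in variants:
            total += 1
            if not variant_correct:
                inherited += 1
    return inherited / total if total else 0.0

def confidence_masking_delta(confidences, correct):
    """Assumed proxy: mean self-reported confidence minus observed accuracy
    on perturbed items, i.e. an overconfidence gap under perturbation."""
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy

# Toy usage with fabricated observations (illustration only):
runs = [(False, [False, True, False]),  # base fails; 2 of 3 variants fail too
        (True,  [True, True]),          # base passes; excluded from the rate
        (False, [True])]                # base fails; variant recovers
rate = collapse_inheritance_rate(runs)          # 2 inherited / 4 variants = 0.5
delta = confidence_masking_delta([0.9, 0.8, 0.7, 0.6],
                                 [True, False, False, True])  # 0.75 - 0.5
```

Under these assumed definitions, the 11–16% inheritance rates and 0.10–0.17 deltas reported in the abstract would correspond to modest but consistent failure propagation and overconfidence across the three models.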
Keywords: Computer and Information Sciences, morphological perturbation, reliability, robustness evaluation, perturbation families, Collapse Index, domain-agnostic framework, confidence masking, adversarial testing, brittleness metrics, Machine Learning, Artificial Intelligence, semantic brittleness, LLM evaluation, stability metrics, CrackTest, model robustness, language model robustness, perturbation analysis, behavioral evaluation, systematic error propagation, Natural Language Processing
