
A publication-grade working paper proposing Massively Documented LLM Hallucination (MDLH) as a formal epistemic-risk framework for AI-assisted scientific discovery, using the LSC neutrino research line as a dual-interpretation case study. The work claims neither that LSC is validated physics nor that it is false; instead, it separates unvalidated physics from AI-generated epistemic artifacts. The MDLH archive now records the 6.2.2 repair path explicitly. Public note: the correction makes the isotropic trace explicit, keeps the directional term traceless, removes the mixed-base 1/E^2 usage, and anchors sidereal tests in a fixed celestial frame.
