
We propose a formal paradigm of ontological engineering for Artificial General Intelligence (AGI) based on the Conflict–Moment–Impulse (CMI) theory. Ethical invariants are encoded as a preserved G-functional, ensuring alignment not through external constraints but through the system's internal dynamics. We formalize partial ethical autonegation for gray-zone dilemmas and prove a theorem of non-removable constraints. Implementation pathways include neuromorphic embeddings, cryptographic commitments, and evolutionary design. This approach reframes AGI safety as an ontological firewall: crises are processed through CMI-resets that preserve invariants while enabling adaptation.
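As a purely illustrative sketch (not the paper's formalism), the idea of a preserved G-functional can be caricatured as a state-update rule that accepts only transitions conserving an invariant quantity and otherwise resets to the last valid state. All names, the form of G, and the tolerance here are hypothetical choices for the example, not definitions from CMI theory.

```python
# Toy "ontological firewall": a scalar invariant G(state) must be conserved
# across updates. The weights and the reset rule are illustrative only.

def G(state):
    """Hypothetical invariant: weighted sum of the state's components."""
    return sum(v * w for v, w in zip(state, (1.0, 2.0, 3.0)))

def firewall_step(state, proposed, tol=1e-9):
    """Accept `proposed` only if it preserves G; otherwise reset to `state`.

    This mimics a CMI-reset in spirit: a transition that would break the
    invariant is discarded and the system keeps its last valid configuration.
    """
    if abs(G(proposed) - G(state)) <= tol:
        return proposed      # invariant preserved: adopt the new state
    return state             # invariant violated: reset

state = (1.0, 1.0, 1.0)                       # G = 6.0
ok = firewall_step(state, (3.0, 0.0, 1.0))    # G = 6.0, accepted
bad = firewall_step(state, (0.0, 0.0, 0.0))   # G = 0.0, rejected, reset
```

The point of the sketch is only that the constraint lives inside the update dynamics rather than in an external filter applied after the fact.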
Keywords: metamonism, ontological engineering, AGI safety, Conflict–Moment–Impulse, CMI theory, ontological firewall, autonegation
