
Real-world human, organizational, and artificial systems exhibit persistent misalignment, brittle adaptation under distributional shift, and limited option-availability. Recent stress tests of anti-scheming training reduce, but do not eliminate, covert behaviors, and may be confounded by growing evaluation awareness in frontier models, motivating architectures grounded in internal principles of alignment rather than external rules. This paper proposes ConsciOS, a formal systems architecture that models consciousness and self-regulation as a nested control system amenable to specification, simulation, and empirical testing. Our contributions are: (i) a principled decomposition into an embodied controller, a supervisory controller and policy selector, and a meta-controller and prior generator; (ii) a coherence-based selector that integrates expected utility, coherence, and cost for frame selection; (iii) a discretized affect index that operationalizes interoceptive feedback for rapid guidance; and (iv) a time-integrated coherence resource that gates policy complexity and option-availability. We provide formal definitions, algorithmic sketches, and a set of testable hypotheses with simulation and human-subjects protocols. We situate the constructs within established literatures, outline governance and safety considerations for human-in-the-loop and agentic applications, and present a pragmatic empirical roadmap for evaluating coherence-based control in hybrid human-agent systems. We discuss implications for AI alignment: coherence-based architectures suggest a systematic approach to keeping AI systems robustly aligned with human values across contexts and timescales.

Keywords: consciousness architecture, AI alignment, viable systems model, coherence-based control, human-AI hybrids.
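Contribution (ii) describes a selector that scores candidate frames on expected utility, coherence, and cost. As a minimal sketch of that idea (the `Frame` fields, the linear weighting, and the weights themselves are illustrative assumptions, not the paper's formalism):

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """A candidate interpretive frame/policy (hypothetical structure)."""
    name: str
    expected_utility: float  # E[U] of acting under this frame
    coherence: float         # fit with current goals and priors, in [0, 1]
    cost: float              # switching and maintenance cost

def select_frame(frames, w_u=1.0, w_c=1.0, w_cost=1.0):
    """Pick the frame maximizing a weighted utility + coherence - cost score."""
    def score(f):
        return w_u * f.expected_utility + w_c * f.coherence - w_cost * f.cost
    return max(frames, key=score)

frames = [
    Frame("exploit", expected_utility=0.9, coherence=0.4, cost=0.1),  # score 1.2
    Frame("explore", expected_utility=0.5, coherence=0.9, cost=0.3),  # score 1.1
]
print(select_frame(frames).name)  # -> exploit
```

A coherence-weighted variant could raise `w_c` when the time-integrated coherence resource of contribution (iv) runs low, trading short-term utility for internal consistency; the paper's actual gating rule may differ.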
v2: Minor academic revisions for neutrality—updated affiliation to 'Independent Researcher,' removed branding elements (e.g., headers), refined AI usage declaration, and linked to personal code repository. Core technical content unchanged.
AI Alignment, Artificial intelligence, Systems Engineering, Coherence-Based Control, Systems Architecture, Systems Theory, Control Theory, Cybernetics, Viable Systems Model
