Powered by OpenAIRE graph
Preprint · 2026
Data sources: ZENODO

Gyroscope-Live: A Meta-Architectural Control System for Stable, Auditable Human–AI Joint Cognition

Author: Bojanowski, Łukasz

Abstract

Large language models (LLMs) operate primarily as open-loop generative systems, producing fluent outputs without intrinsic mechanisms for trajectory control, role separation, or auditability. While powerful, this mode of operation gives rise to recurring structural failure modes, including hallucination propagation, semantic drift, responsibility ambiguity, and non-reproducibility. Gyroscope-Live introduces a meta-architectural control system for human–AI joint cognition. Rather than generating content, Gyroscope governs how generative systems are used by enforcing explicit cognitive roles, structured execution phases, and closed-loop control over time. The architecture separates planning, critique, execution, and governance into distinct, inspectable functions, supported by layered control (BIOS, Kernel, Delta, Session) and decision logging. This enables auditability, continuity, and error containment independently of model internals. Gyroscope-Live is not an AI model, agent, or optimizer. It is a model-agnostic control architecture designed to stabilize generative systems and make them usable as reliable cognitive instruments in real-world, long-horizon work. The system is compatible with normative collaboration frameworks such as the Interference Intelligence Layer (I.I.L), which defines ethical and constitutional principles for human–AI cooperation, while Gyroscope operationalizes them at the control level. This whitepaper presents the conceptual architecture, execution loop, failure modes, containment mechanisms, and evolutionary context of Gyroscope-Live as a foundational control layer for responsible human–AI co-creation.
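The separation of planning, critique, execution, and governance into distinct, logged functions can be illustrated with a minimal sketch. All names here (`Role`, `Decision`, `ControlLoop`, the four role labels) are hypothetical illustrations, not the whitepaper's actual API; the sketch only shows the general pattern of one role per phase with an inspectable decision log.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable

# Hypothetical role taxonomy; the whitepaper's actual roles may differ.
class Role(Enum):
    PLANNER = "planner"
    CRITIC = "critic"
    EXECUTOR = "executor"
    GOVERNOR = "governor"

@dataclass
class Decision:
    role: Role     # which cognitive role produced this output
    phase: str     # which execution phase it belongs to
    payload: str   # the recorded output itself

@dataclass
class ControlLoop:
    """Closed-loop controller sketch: each phase is handled by exactly one
    role, and every decision is appended to an inspectable audit log."""
    log: list[Decision] = field(default_factory=list)

    def step(self, role: Role, phase: str, handler: Callable[[], str]) -> str:
        output = handler()  # delegate to a generative model or a human
        self.log.append(Decision(role, phase, output))  # auditability
        return output

loop = ControlLoop()
loop.step(Role.PLANNER, "plan", lambda: "outline the task")
loop.step(Role.CRITIC, "critique", lambda: "check the outline for drift")
loop.step(Role.EXECUTOR, "execute", lambda: "produce the draft")
loop.step(Role.GOVERNOR, "govern", lambda: "record and approve the result")

# The log preserves who decided what, in which phase, in what order.
assert [d.role for d in loop.log] == [
    Role.PLANNER, Role.CRITIC, Role.EXECUTOR, Role.GOVERNOR
]
```

Keeping the log outside any single handler is what makes the loop model-agnostic: the generative system can be swapped without losing continuity or audit trail.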

Keywords

role separation, generative AI, closed-loop systems, AI safety, human–AI collaboration, cognitive architecture, cognitive orchestration, meta-architecture, auditability, control systems, AI governance
