
This position paper proposes a systems-theoretic reframing of AI alignment as a problem of interactional coherence rather than one of constraint enforcement alone. Drawing on dynamical systems theory and observations from long-context deployments, it introduces the concept of functional central identity attractors as a framework for understanding behavioral stability in large language models. The approach complements existing safety mechanisms, emphasizing structural coherence as a contributor to reliability in persistent, long-context systems.
