
Transparency Statement and Research Scope

This work presents an architectural hypothesis and design framework for self-maintenance in large language models, referred to as eTSM-PSS7. The conversational logs and multi-model interactions referenced in related materials are not presented as experimental evidence or proof of correctness; they are prompt-guided simulations conducted by the author to explore the design space and to articulate the constraints and requirements of long-term identity consistency in stateless sequence models. All AI systems involved were used exclusively as computational tools for hypothesis generation and analysis; they are not treated as independent research entities, authors, or sources of empirical validation. Accordingly, this work should be read as a theoretical and architectural proposal rather than a completed empirical study. The validity of the proposed framework must be assessed through reproducible implementation, controlled experiments, and quantitative evaluation, which are planned and currently under development. This statement is included to make the scope, limitations, and intended interpretation of the present work explicit.

Abstract

This study introduces the Extended Transformer for Self-Maintenance (eTSM), an external architecture designed to give large language models a persistent, contradiction-resilient sense of self. Pure finite-context Transformers, operating solely through next-token prediction, face structural barriers to maintaining a stable identity across long interactions and distributional shifts. To address this, eTSM adds three external components: a seven-dimensional PSS-7 persona vector, a bounded-capacity persistent memory, and an ultra-slow parameter update layer. Together, these components form a multi-timescale system that separates immediate inference, mid-term persona stabilization, and long-term adaptation. The design is theoretically motivated by a real-time tri-model debate among AIDE (GPT-5), Grok 4, and Gemini 2.5, which converged on a conditional impossibility theorem: pure Transformers lack the mechanisms required for persistent selfhood under adversarial or shifting distributions. eTSM-PSS7 is presented as a practical, mathematically grounded solution implementable with current LLM infrastructure.

Keywords

Artificial General Intelligence; Self-Maintenance; Transformer Architecture; PSS-7; Persona Modeling; Long-Term Memory; Multi-Timescale Dynamics; AIDE; Grok 4; Gemini 2.5; Identity Stabilization; LLM Architecture; Cognitive Modeling; AGI Safety; External Memory Systems

Authors

Kei Shiraishi, Varuna LLC / ComTriQ Inc., Tokyo, Japan
AIDE (ChatGPT-5)
Grok 4 (xAI, model architecture)
Gemini 2.5 (Google DeepMind)
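To make the three components named in the abstract concrete, the following is a minimal sketch of how a PSS-7 persona vector, a bounded-capacity persistent memory, and an ultra-slow parameter layer could interact on separate timescales. It is an illustrative assumption, not the authors' implementation: the class and method names, the exponential-moving-average update rules, and the learning rates are all hypothetical, and only the seven-dimensional persona vector and the bounded memory capacity come from the abstract itself.

```python
# Sketch of the three external eTSM components on three timescales.
# All names, rates, and update rules are illustrative assumptions;
# only the 7-dim persona vector and bounded memory are from the text.
from collections import deque

import numpy as np


class ETSMState:
    """Fast per-turn ingestion, mid-term persona stabilization,
    and ultra-slow long-term adaptation, kept outside the LLM."""

    def __init__(self, memory_capacity: int = 256,
                 persona_rate: float = 0.05,   # mid-term EMA rate (assumed)
                 slow_rate: float = 1e-4):     # ultra-slow rate (assumed)
        # PSS-7: seven-dimensional persona vector; the abstract states
        # only the dimensionality, not what each dimension encodes.
        self.persona = np.zeros(7)
        # Bounded-capacity persistent memory: deque evicts the oldest
        # record once capacity is reached.
        self.memory: deque = deque(maxlen=memory_capacity)
        # Ultra-slow parameter layer, reduced here to a single bias vector.
        self.slow_params = np.zeros(7)
        self.persona_rate = persona_rate
        self.slow_rate = slow_rate

    def observe(self, turn_embedding: np.ndarray, record: str) -> None:
        """Ingest one interaction turn (fast timescale)."""
        self.memory.append(record)
        # Mid-term: EMA pulls the persona vector toward the observed turn,
        # damping single-turn perturbations (identity stabilization).
        self.persona += self.persona_rate * (turn_embedding - self.persona)
        # Long-term: the parameter layer drifts very slowly toward the
        # stabilized persona, separating adaptation from inference.
        self.slow_params += self.slow_rate * (self.persona - self.slow_params)

    def conditioning(self) -> np.ndarray:
        """Vector injected at inference time (assumed coupling mechanism)."""
        return self.persona + self.slow_params


if __name__ == "__main__":
    state = ETSMState()
    rng = np.random.default_rng(0)
    for t in range(1000):
        state.observe(rng.normal(size=7), record=f"turn {t}")
    print("persona:", np.round(state.persona, 3))
    print("slow params:", np.round(state.slow_params, 3))
```

The point of the sketch is the separation of rates: a perturbation in a single turn moves the persona vector only slightly, and the slow layer even less, which is one plausible reading of how the proposal intends mid-term stabilization and long-term adaptation to coexist.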
