ZENODO
Other ORP type . 2025
License: CC BY
Data sources: ZENODO

Extended Transformers for Self-Maintenance: An External PSS-7 Architecture for Persistent, Contradiction-Resilient Identity in LLMs

RP No.13
Authors: Kei Shiraishi


Transparency Statement and Research Scope

This work presents an architectural hypothesis and design framework for self-maintenance in large language models, referred to as eTSM-PSS7. The conversational logs and multi-model interactions referenced in related materials are not presented as experimental evidence or proof of correctness. They represent prompt-guided simulations conducted by the author to explore the design space and to articulate the constraints and requirements of long-term identity consistency in stateless sequence models. All AI systems involved are used exclusively as computational tools for hypothesis generation and analysis. They are not treated as independent research entities, authors, or sources of empirical validation. Accordingly, this work should be understood as a theoretical and architectural proposal rather than a completed empirical study. The validity of the proposed framework must be assessed through reproducible implementation, controlled experiments, and quantitative evaluation, which are planned and currently under development. This statement is included to ensure clarity regarding the scope, limitations, and intended interpretation of the present work.

Abstract

This study introduces the Extended Transformer for Self-Maintenance (eTSM), an external architecture designed to give large language models a persistent and contradiction-resilient sense of self. Pure finite-context Transformers, operating solely through next-token prediction, face structural barriers to maintaining a stable identity across long interactions and distributional shifts. To address this, eTSM adds three external components: a seven-dimensional PSS-7 persona vector, a bounded-capacity persistent memory, and an ultra-slow parameter update layer. Together, these components form a multi-timescale system that separates immediate inference, mid-term persona stabilization, and long-term adaptation.

The design is theoretically motivated by a real-time tri-model debate between AIDE (GPT-5), Grok 4, and Gemini 2.5, which converged on a conditional impossibility theorem: pure Transformers lack the mechanisms required for persistent selfhood under adversarial or shifting distributions. eTSM-PSS7 is presented as a practical and mathematically grounded solution implementable with current LLM infrastructure.

Keywords

Artificial General Intelligence; Self-Maintenance; Transformer Architecture; PSS-7; Persona Modeling; Long-Term Memory; Multi-Timescale Dynamics; AIDE; Grok 4; Gemini 2.5; Identity Stabilization; LLM Architecture; Cognitive Modeling; AGI Safety; External Memory Systems

Authors

Kei Shiraishi, Varuna LLC / ComTriQ Inc., Tokyo, Japan
AIDE (ChatGPT, GPT-5)
Grok 4 (xAI)
Gemini 2.5 (Google DeepMind)
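The separation of timescales described above can be illustrated with a minimal sketch. The class name, update rules, and rates below are illustrative assumptions made for exposition; the paper does not specify an implementation, and the actual eTSM-PSS7 dynamics may differ:

```python
import collections
import numpy as np

class ETSMState:
    """Hypothetical sketch of the three external eTSM-PSS7 components:
    a 7-dimensional persona vector, a bounded persistent memory, and an
    ultra-slow adaptation layer. All update rules here are assumptions."""

    def __init__(self, memory_capacity=128, slow_lr=1e-4):
        # PSS-7: a seven-dimensional persona vector (mid-term timescale).
        self.persona = np.zeros(7)
        # Bounded-capacity persistent memory: oldest entries are evicted.
        self.memory = collections.deque(maxlen=memory_capacity)
        # Ultra-slow layer: a slowly drifting baseline for the persona.
        self.slow_baseline = np.zeros(7)
        self.slow_lr = slow_lr

    def step(self, observed_persona, note):
        """One interaction step crossing all three timescales."""
        # Immediate timescale: record the interaction in persistent memory.
        self.memory.append(note)
        # Mid-term timescale: pull the persona toward the observed signal.
        self.persona += 0.1 * (observed_persona - self.persona)
        # Long-term timescale: ultra-slow drift of the baseline.
        self.slow_baseline += self.slow_lr * (self.persona - self.slow_baseline)
        # Stabilization: anchor the persona near the slow baseline, so a
        # sudden contradictory signal cannot move identity far in one step.
        self.persona = 0.9 * self.persona + 0.1 * self.slow_baseline
        return self.persona
```

The design intent captured here is that the persona vector tracks recent interactions quickly, while the slow baseline changes by orders of magnitude less per step, so identity drift under adversarial input is bounded by the anchoring term rather than by the context window.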
