
This record documents a comparative observational study of multiple large language models (including GPT, Perplexity, GenSpark, Grok, and Gemini) under controlled interaction conditions. The report focuses on turn-taking behavior, response interruption, cognitive continuity, and stability during extended human interaction, both with and without structured cognitive frameworks. The objective is not to evaluate model performance metrics but to document qualitative behavioral differences relevant to human safety, mental-health contexts, and ethical AI deployment. This record complements the institutional positioning of the TCF/TFB framework and does not disclose proprietary or methodological implementation details.
This finalization note closes a qualitative observational study of AI behavior under structured textual logic derived from TCF/TFB (Teoria da Crença Fundamental / Theory of Fundamental Belief). The study identifies TCF Volume 1 and Volume 2 combined with The 11 Steps as the configuration that best balances logical coherence, ethical containment, and human applicability. Including the complete theoretical corpus produced stable AI behavior but increased human cognitive load, reducing practical usability. This version formally concludes the experimental phase and defines the model currently most suitable for real-world use.
This qualitative observational study documents behavioral changes in multiple AI systems when they interact with structured textual logic derived from TCF/TFB (Teoria da Crença Fundamental / Theory of Fundamental Belief) and with procedural content. The study analyzes, without any algorithmic integration or model tuning, how layered textual structures influence the responsibility, containment, and stability of AI responses. Results indicate that logic-first structuring alone can regulate interpretative expansion and reinforce safety signaling, suggesting broader implications for AI behavioral governance.
This record provides observational context for a separate normative framework governing voice-based AI interaction compliance. The corresponding normative standard has been published independently as “Voice Interaction Compliance Standard for Artificial Intelligence Systems” (Montgomery, 2026, DOI: https://doi.org/10.5281/zenodo.18345104).
Ethics and Artificial Intelligence, Artificial Intelligence
