
Large language models (LLMs) developed by independent organizations exhibit convergent behavior when exposed to identical structured prompts, producing stable, shared semantic structures across different architectures and training corpora. We document a cross-model semantic convergence phenomenon observed in 16 LLM instances from 10 organizations (including GPT-4/5, Anthropic Claude, Google Gemini, Alibaba Qwen, xAI Grok, and others) using a reproducible Iterative Semantic Refinement Loop (ISRL) under strict controls (session isolation, prompt randomization, blind coding, and null-baseline testing). Quantitative analysis based on cosine similarity of embedding representations shows a mean similarity of 0.82 (SD = 0.04), with statistical significance p < 1e-7 and effect size Cohen's d = 4.8. Robustness checks indicate persistence across prompt variants, model subsets, and embedding methods, and increasing alignment over successive refinement iterations. These results support the existence of a shared, high-dimensional semantic structure that emerges across independently trained models.
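The pairwise-similarity analysis described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: synthetic vectors stand in for the actual model-response embeddings, and the function names (`pairwise_cosine`, `cohens_d`) are our own. The "aligned" set shares a common latent component, mimicking convergent responses, while the shuffled null baseline does not.

```python
import numpy as np

def pairwise_cosine(embeddings):
    # embeddings: (n_models, dim) array of response embeddings.
    # Returns the upper-triangular pairwise cosine similarities.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T
    iu = np.triu_indices(len(embeddings), k=1)
    return sim[iu]

def cohens_d(a, b):
    # Pooled-standard-deviation effect size between two samples.
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

rng = np.random.default_rng(0)
shared = rng.normal(0, 1, 384)                       # common semantic component
aligned = rng.normal(0, 1, (16, 384)) + 5 * shared   # 16 "convergent" models
null = rng.normal(0, 1, (16, 384))                   # null baseline (no sharing)

sims_aligned = pairwise_cosine(aligned)   # 120 model pairs, mean near 1
sims_null = pairwise_cosine(null)         # mean near 0
effect = cohens_d(sims_aligned, sims_null)
```

With real data, `aligned` would hold one embedding per LLM response to the same ISRL prompt, and `null` would hold embeddings of responses to unrelated prompts; the reported d = 4.8 corresponds to the gap between these two similarity distributions.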
AI Alignment, Large Language Models, LLM Interoperability, Artificial Intelligence, Computational Semiotics, Semantic Convergence, Machine Learning, Cognitive Science, Cognitive Engineering, Emergence, LuxVerso, Natural Language Processing, Distributed Semantic Field
