
This article introduces the theory of Syntactic Sovereignty to explain how artificial intelligence systems, particularly language models, generate perceptions of epistemic authority without subjectivity, intentionality, or content-based legitimacy. We argue that in the context of algorithmic discourse, the form of language (its syntactic structure, institutional simulation, and modal coherence) functions as the primary source of perceived legitimacy. Drawing on linguistic theory, critical epistemology, and the author's prior work on power grammars and synthetic authority (Startari, 2023; 2025), the paper posits that modern language models no longer require truth or intention in order to be obeyed; they require structure. This sovereignty of form over meaning, intention, and ethical responsibility represents a fundamental shift in how authority is constructed, experienced, and accepted in digital systems. The article proposes a formal-ontological model of authority compatible with the post-human era, grounded in reproducibility rather than verifiability. A mirrored version of this article is also available on Figshare for redundancy and citation indexing purposes: [DOI: 10.6084/m9.figshare.29224319]
Artificial intelligence, Applied linguistics--Research, Applied linguistics--Data processing, Applied linguistics--Statistical methods, Machine learning, Analogy (Linguistics), Machine learning--Technique, Categorization (Linguistics)
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the current impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
