
Theorem of the Limit of Conditional Obedience Verification (TLOC): Structural Non-Verifiability in Generative Models

This article presents a formal demonstration of a structural limit in contemporary generative models: the impossibility of verifying whether a system has internally evaluated a condition before producing an output that appears to comply with it. The theorem (TLOC) shows that in architectures based on statistical inference, such as large language models (LLMs), obedience cannot be distinguished from simulation when the latent trajectory π(x) lacks symbolic access and does not entail the condition C(x). This structural opacity renders ethical, legal, or procedural compliance unverifiable. The article defines the TLOC as a negative operational theorem, falsifiable only under conditions in which internal logic is traceable. It concludes that current LLMs can simulate normativity but cannot prove conditional obedience. The TLOC thus formalizes the structural boundary previously developed by Startari in works on syntactic authority, the simulation of judgment, and the algorithmic colonization of time.

Redundant archive copy: https://doi.org/10.6084/m9.figshare.29329184 (maintained for structural traceability and preservation of citation continuity).
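Read schematically, the theorem's negative claim can be notated as follows. This is a minimal sketch built from the abstract's own symbols π(x) and C(x); the verifier V, the output function f, and the entailment sign ⊢ are illustrative additions, not notation taken from the article:

\[
\big(\pi(x) \not\vdash C(x)\big) \;\Longrightarrow\; \neg\,\exists V \;\big[\, V\big(\pi(x), f(x)\big) = 1 \iff f \text{ evaluated } C(x) \text{ before emitting } f(x) \,\big]
\]

That is, when the latent trajectory does not symbolically entail the condition, no verification procedure over the model's states and outputs can separate genuine conditional obedience from its statistical simulation.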
Keywords: Artificial intelligence, History of philosophy, Artificial Intelligence/economics, neural-symbolic hybrid, falsifiability in AI, latent trajectory, Information Theory, Systems Theory, Linguistics/ethics, Artificial Intelligence/standards, Social Theory, simulation of judgment, Decision Theory, Artificial Intelligence, Artificial Intelligence/trends, Linguistics/trends, linguistic, Linguistics/standards, Artificial Intelligence/ethics, Ethical theories, Philosophy of language, algorithmic ethics, Linguistics/methods, Knot theory, Linguistics, Linguistics/classification, Linguistics/education, TLOC, epistemic opacity, symbolic evaluation, theorem, structural verifiability, Artificial Intelligence/classification, Grounded Theory, FOS: Languages and literature, Linguistics/instrumentation, Ethical Theory
| Indicator | Description | Value |
| --- | --- | --- |
| citations | An alternative to the Influence indicator; also reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| popularity | Reflects the "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Average |
| influence | Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Average |
| impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
