ZENODO
Other literature type . 2026
Data sources: ZENODO

What Can Humans Trust LLM AI to Do?

Authors: Holland, Ralph Bruce


Abstract

At present, Large Language Model (LLM) platforms are increasingly embedded in domains where trust, meaning, and decision-making carry real social consequences. This paper examines what humans can justifiably trust LLM systems to do under current architectural conditions, taking as given previously established analyses of semantic drift, normative drift, and the absence of integrity in contemporary LLM instances. Rather than assessing model capability or alignment, the paper treats trust as a governance question: which functions can be safely entrusted to systems whose outputs are fluent but whose meanings do not reliably bind across time, context, or re-expression. The paper argues that, under present conditions, LLMs can be trusted as instruments of cognitive assistance, supporting exploration, articulation, transformation, and pattern discovery, where failure remains recoverable and authority remains human. Conversely, it shows that extending trust to roles involving custody of meaning, continuity of obligation, or normative authority introduces predictable structural risk, including erosion of shared norms, diffusion of responsibility, and institutional fatigue. These risks arise not from misuse or malice, but from a mismatch between human expectations of integrity and the architectural properties of current conversational AI platforms. By drawing a clear trust boundary grounded in existing failure analyses, the paper provides a practical framework for human-AI collaboration that preserves human agency while remaining forward-compatible with governance architectures such as Cognitive Memoisation and CM-2. It is intended as a transitional statement: defining safe trust relationships today, while clarifying the conditions under which those boundaries may responsibly shift in the future.
Prerequisite Reading Note

This paper assumes the analyses of semantic drift, normative drift, and integrity failure developed in Integrity and Semantic Drift in Large Language Model Systems (ref a). Those concepts are used here as established premises and are not restated. Readers unfamiliar with those failure modes should read that paper first, as the trust boundaries articulated here are derived directly from its conclusions.

----

This work has not undergone academic peer review. The DOI asserts existence and provenance only; it does not imply validation or endorsement. This Zenodo record is an archival projection of a publicly published artefact. Canonical versions and live revisions are maintained at the original publication URL listed above.

Keywords

Stateless Systems, Orthogonal Governance Failure Axes, Semantic Drift, Human-in-the-Loop, AI Integrity, LLM Governance, Cognitive Memoisation
