Article
Data sources: ZENODO

AI che rispondono sempre [AI That Always Respond]

Authors: Minasi della Rocca, Alberto

Abstract

This article examines a structural property of generative artificial intelligence systems that is often underestimated: their inability to refrain from producing an output. In contrast to the common focus on error and accuracy, the paper argues that generative models are architecturally designed to always generate a response, even when the underlying premises are incomplete, ambiguous, or incorrect. This behavior does not stem from a malfunction but from the probabilistic logic of token prediction and embedding-based similarity that governs these systems: decoding selects a continuation from a probability distribution over tokens, and some continuation always exists, however weak the signal behind it. Starting from a technical description of this mechanism, the analysis turns to its legal implications, with particular attention to the concept of causation. The article suggests that a system structurally oriented to transform any input into a plausible output may constitute an autonomous causal factor in the production of legally relevant events. From this perspective, the issue is no longer limited to the quality of the response; it concerns the inevitability of the response itself and its potential role in generating harm. The current regulatory approach, including the AI Act, is critically assessed as focused primarily on outputs and risk mitigation, without fully addressing the architectural source of the problem. The article concludes that a reliable AI system should be capable not only of generating answers but also of suspending them when the conditions for a meaningful response are not met. The problem is not error. It is architecture.
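To make the structural point concrete, here is a minimal Python sketch, not drawn from the article itself. The three-token vocabulary and logits are toy values, and `answer_or_abstain` with its entropy threshold is a hypothetical illustration of the kind of suspension mechanism the abstract calls for, not a technique the paper proposes.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def always_answer(logits, vocab):
    """Standard greedy decoding: some token is always emitted,
    no matter how uninformative the logits are."""
    probs = softmax(logits)
    return vocab[probs.index(max(probs))]

def answer_or_abstain(logits, vocab, max_entropy=1.0):
    """Hypothetical abstention gate: suspend the answer when the
    distribution is too flat (high entropy) to carry real signal."""
    probs = softmax(logits)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    if entropy > max_entropy:
        return None  # refrain from producing an output
    return vocab[probs.index(max(probs))]

# Toy example: near-uniform logits encode essentially no preference.
vocab = ["yes", "no", "maybe"]
flat_logits = [0.01, 0.0, 0.02]
print(always_answer(flat_logits, vocab))      # still answers: 'maybe'
print(answer_or_abstain(flat_logits, vocab))  # None: the gate abstains
```

The asymmetry is the point the abstract makes: the first function cannot return nothing, because an argmax over a softmax always yields a token; abstention does not emerge from the architecture and must be added as an explicit, separate mechanism.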
