
This paper examines the phenomenon of AI hallucinations in generative large language models, analyzing their root causes and evaluating mitigation strategies. We identify core contributors such as data quality issues, model complexity, lack of grounding, and limitations inherent in the generative process. We then examine the risks across legal, business, and user-facing domains, highlighting consequences such as misinformation, erosion of trust, and productivity loss. To address these challenges, we survey mitigation techniques including data curation, retrieval-augmented generation (RAG), prompt engineering, fine-tuning, multi-model systems, and human-in-the-loop oversight.
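To make the RAG mitigation concrete, the sketch below illustrates the generic pattern: retrieve the passages most relevant to a query, then prepend them to the prompt so the model can ground its answer in source text. This is not the paper's implementation; all names (`DOCUMENTS`, `retrieve`, `build_prompt`) are hypothetical, the retriever is a toy bag-of-words cosine similarity, and a production system would use dense embeddings, a vector index, and an actual LLM call.

```python
# Minimal RAG sketch (illustrative only, not the paper's method).
# Retrieval is toy bag-of-words cosine similarity; real systems use
# dense embeddings and a vector index.
import math
from collections import Counter

# Hypothetical knowledge base the model should be grounded in.
DOCUMENTS = [
    "Retrieval-augmented generation grounds model output in retrieved text.",
    "Hallucinations are fluent but factually unsupported model outputs.",
    "Human-in-the-loop review catches errors that automated checks miss.",
]

def bow(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return (
        "Answer using ONLY the context below; say 'unknown' if it is not covered.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The assembled prompt would be sent to a language model.
    print(build_prompt("What is retrieval-augmented generation?"))
```

The instruction to answer only from the supplied context, and to admit ignorance otherwise, is the grounding step that reduces the model's tendency to fabricate unsupported claims.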
Keywords: Artificial Intelligence/economics, Artificial Intelligence/standards, AI hallucinations, large language models, generative AI, reliability, mitigation strategies, retrieval-augmented generation
