Preprint
Data sources: ZENODO

Understanding Provenance: A User Study on Explainability in Probabilistic Multi-Evidence Reasoning Systems

Authors: Alege, Aliyu Agboola


Abstract

Background: Machine learning systems deployed in high-stakes domains often lack transparency in their reasoning processes, creating barriers to user trust and appropriate reliance. While explainable AI (XAI) methods provide feature importance, they fail to expose the complete reasoning chain from evidence to prediction.

Methods: We conducted a user study (N = 25) with domain experts evaluating AI predictions for tax compliance risk assessment. Participants assessed 5 carefully selected cases while viewing detailed provenance explanations that included evidence chains, contribution weights, uncertainty distributions, and source credibility scores.

Results: Participants demonstrated moderate to strong understanding of provenance-based explanations (M = 3.34 ± 0.92 on a 5-point scale), with corresponding trust levels (M = 3.38 ± 0.92). Analysis revealed substantial variation across case types: high-confidence correct predictions achieved 80% acceptance, while borderline and mixed-evidence cases showed more cautious evaluation (72–80% acceptance). Understanding and trust showed a positive correlation (r = 0.394), suggesting that comprehension of reasoning processes influences confidence in AI predictions.

Conclusions: Provenance ledgers enable domain experts to critically evaluate AI reasoning by exposing evidence chains, weights, and uncertainty. The variation in acceptance rates across cases demonstrates appropriate reliance: participants were more cautious with low-confidence and mixed-evidence predictions. This supports the value of transparent reasoning traces for human-AI collaboration in high-stakes decision-making.

Keywords: Explainable AI (XAI), provenance tracking, AI transparency, human-AI collaboration, trust in AI, appropriate reliance, multi-evidence reasoning, interpretable machine learning, decision support systems, uncertainty visualization, evidence-based AI, user studies, high-stakes AI, tax compliance systems, reasoning traceability
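
To make the abstract's notion of a provenance explanation concrete, the following is a minimal illustrative sketch in Python of what a single provenance ledger entry could look like. The paper does not publish a schema, so the class names, field names, and the credibility-weighted aggregation rule below are assumptions made purely for exposition, not the authors' implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    source: str         # where the signal came from (e.g. "filing history")
    credibility: float  # source credibility score in [0, 1]
    weight: float       # contribution weight toward the prediction
    value: float        # evidence signal, e.g. a risk indicator in [0, 1]

@dataclass
class ProvenanceEntry:
    case_id: str
    prediction: str      # e.g. "high risk"
    confidence: float    # model confidence in [0, 1]
    evidence_chain: List[Evidence] = field(default_factory=list)

    def weighted_contribution(self) -> float:
        """Toy aggregation: credibility-weighted average of evidence signals
        (an assumption for illustration; the paper's aggregation may differ)."""
        total = sum(e.weight * e.credibility for e in self.evidence_chain)
        if total == 0:
            return 0.0
        return sum(e.weight * e.credibility * e.value for e in self.evidence_chain) / total

# Hypothetical entry of the kind a reviewer might inspect when judging a prediction.
entry = ProvenanceEntry(
    case_id="case-001",
    prediction="high risk",
    confidence=0.87,
    evidence_chain=[
        Evidence(source="late filings (3 yrs)", credibility=0.9, weight=0.5, value=0.8),
        Evidence(source="third-party income mismatch", credibility=0.7, weight=0.3, value=0.9),
        Evidence(source="industry risk profile", credibility=0.6, weight=0.2, value=0.4),
    ],
)
print(entry.prediction, round(entry.weighted_contribution(), 3))

Exposing the chain, weights, and credibility scores in this form is what lets a domain expert check not only the prediction but also which evidence drove it and how reliable each source was.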
