Powered by OpenAIRE graph
Versions (5 in total):
• https://doi.org/10.1007/978-3-... : Part of book or chapter of book, 2021, peer-reviewed. License: Springer TDM. Data source: Crossref
• http://dx.doi.org/10.1007/978-... : Part of book or chapter of book. License: Springer TDM. Data source: Sygma
• http://dx.doi.org/10.1145/3461... : Part of book or chapter of book, 2021
• http://dx.doi.org/10.1007/978-... : Part of book or chapter of book, 2021

Principles of Explainable Artificial Intelligence

Authors: Guidotti, R.; Monreale, A.; Pedreschi, D.; Giannotti, F.

Abstract

The last decade has witnessed the rise of a black-box society, in which obscure classification models are adopted by Artificial Intelligence (AI) systems. The lack of explanations of how AI systems make decisions is a key ethical issue hindering their adoption in socially sensitive and safety-critical contexts. Indeed, the problem lies not only in the lack of transparency but also in possible biases that the AI inherits from prejudices hidden in the training data. Thus, research on eXplainable AI (XAI) has recently attracted much attention. AI systems are employed in a wide variety of applications; consequently, different users require different types of explanations. We survey the existing proposals in the literature, discussing the principles of XAI. In addition, we illustrate the different types of explanations returned by established explainers. Finally, we discuss their usability and how they can be exploited in real-world applications.
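The abstract refers to explanations returned by established explainers. As a purely illustrative sketch (not taken from this chapter), the following pure-Python routine shows one common model-agnostic idea, permutation feature importance: shuffle one feature at a time and count how often the black box changes its prediction. The `black_box` model and every parameter here are hypothetical.

```python
import random

def black_box(x):
    # Toy "opaque" classifier: only the first feature really matters.
    return 1 if 2.0 * x[0] + 0.1 * x[1] > 1.0 else 0

def permutation_importance(model, data, n_features):
    # Score each feature by the fraction of predictions that flip
    # when that feature's column is shuffled across the dataset.
    base = [model(x) for x in data]
    importances = []
    for j in range(n_features):
        shuffled = [x[j] for x in data]
        random.shuffle(shuffled)
        flips = 0
        for i, x in enumerate(data):
            z = list(x)
            z[j] = shuffled[i]
            if model(z) != base[i]:
                flips += 1
        importances.append(flips / len(data))
    return importances

random.seed(0)
data = [[random.random(), random.random()] for _ in range(500)]
scores = permutation_importance(black_box, data, 2)
# the decisive first feature should receive a much higher score
```

This is only one family of explanation (global feature relevance); the chapter surveys several others, such as local explanations for individual decisions.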

Country
Italy
Keywords

Ethical data mining; Explainable artificial intelligence; Explanation methods; Interpretable machine learning; Transparent models

Impact indicators (provided by BIP!):
• Selected citations: 30. These citations are derived from selected sources. This is an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
• Popularity: Top 10%. Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
• Influence: Top 10%. Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
• Impulse: Top 10%. Reflects the initial momentum of an article directly after its publication, based on the underlying citation network.