Replication package for the ICSE NIER submission "From Research to Practice: A Survey of XAI Process Frameworks." It provides details about the methods and data used in our analysis; for more information, see the README.
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources. An alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| Views | | 12 |
| Downloads | | 14 |

Views provided by UsageCounts
Downloads provided by UsageCounts