A variety of automated software fault prediction techniques have been proposed in recent years, in particular for the important class of spreadsheet programs. Software fault prediction techniques commonly create ranked lists of "suspicious" program statements for developers to inspect. Existing research, however, suggests that solely providing such ranked lists may not always be effective. In particular, it was found that developers often seek explanations for the outcomes provided by a debugging tool and that such explanations may be key for developers to trust and rely on the tool. Research on how to explain the outcomes of fault prediction techniques, which are often based on complex machine learning models, is scarce, and little is known about how such explanations are perceived by developers. With this work, we aim to narrow this research gap and study how spreadsheet users perceive different forms of explanations in the context of a machine-learning-based fault prediction tool. A between-subjects user study (N=120) revealed significant differences between the explored explanation styles. In particular, we found that well-designed natural language explanations can indeed help users better understand why certain spreadsheet cells were marked by the debugging tool, and that such explanations can be effective in increasing users' trust compared to a black-box system.
User Study, Explanations, Spreadsheets, Explainable AI, Fault Prediction