
The educational data mining community has long acknowledged the "challenge of interpretability" that has grown alongside the adoption of complex machine learning algorithms for educational purposes. Researchers have pursued a variety of approaches to address this concern, often turning to methods borrowed from the broader eXplainable AI (XAI) community. However, serious limitations of existing methods have led to calls for a re-imagining of what explainability should look like. The HEXED workshop aims to bring together a community of researchers who can work together to (1) develop a shared vision and common vocabulary for XAI in education, (2) share and disseminate work, (3) create robust methods for increasing interpretability, and (4) develop evaluation metrics for assessing explanations and model interpretability. We propose to achieve this through collaborative sense-making, research poster presentations, and lively discussions surrounding the current and future needs of the community.
