
The increasing prevalence and sophistication of automated tutoring systems necessitate the development of new methods for their evaluation and improvement. In particular, data-driven methods offer the opportunity to give teachers insight into student interactions with online systems, facilitating improvements that maximise the educational value of those systems. In this paper, we present a new technique for analysing feedback in an automated programming tutor. Our method first clusters submitted programs with the same functionality, then applies sequential pattern mining and graphically visualises student progress through an exercise. Using data from a beginner Python course, we demonstrate how this method can be applied to programming exercises to analyse student approaches, responses to feedback, areas of greatest difficulty and repetition of mistakes. Teachers could use this process to understand student behaviour more effectively, adapting both traditional and online teaching materials and feedback to optimise student experiences and outcomes.
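To make the described pipeline concrete, the sketch below clusters submissions by their outputs on a small test suite and then counts frequent sequences of cluster labels across student trajectories. This is a minimal illustration under assumptions of our own, not the authors' implementation: the exercise function name `f`, the test inputs, and the contiguous-subsequence miner (a simplification of full sequential pattern mining, which also finds patterns with gaps) are all hypothetical.

```python
from collections import defaultdict

def functional_signature(src, test_inputs):
    """Run a submission against fixed test inputs; the tuple of outputs
    (or error markers) identifies its functional behaviour."""
    outputs = []
    for args in test_inputs:
        env = {}
        try:
            # Assumption: each submission defines f(); the name is illustrative.
            # Real systems would sandbox untrusted student code rather than exec() it.
            exec(src, env)
            outputs.append(repr(env["f"](*args)))
        except Exception as exc:
            outputs.append("error:" + type(exc).__name__)
    return tuple(outputs)

def frequent_patterns(trajectories, min_support=2, max_len=3):
    """Count contiguous subsequences of cluster labels across student
    trajectories, a simplified stand-in for sequential pattern mining."""
    support = defaultdict(set)
    for student, seq in trajectories.items():
        for n in range(2, max_len + 1):
            for i in range(len(seq) - n + 1):
                support[tuple(seq[i:i + n])].add(student)
    return {p: len(s) for p, s in support.items() if len(s) >= min_support}

# Toy data: two students each move from an incorrect to a correct cluster;
# the two correct solutions differ syntactically but cluster together.
tests = [(1,), (3,)]
attempts = {
    "s1": ["def f(x): return x + 2", "def f(x): return x * 2"],
    "s2": ["def f(x): return x + 2", "def f(x): return x + x"],
}
labels, trajectories = {}, {}
for student, sources in attempts.items():
    sigs = [functional_signature(src, tests) for src in sources]
    trajectories[student] = [labels.setdefault(s, f"C{len(labels)}") for s in sigs]

print(trajectories)                     # {'s1': ['C0', 'C1'], 's2': ['C0', 'C1']}
print(frequent_patterns(trajectories))  # {('C0', 'C1'): 2}
```

The mined patterns (here, the shared path from the incorrect cluster C0 to the correct cluster C1) are what a teacher-facing visualisation of student progress would summarise graphically.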
