
This preprint presents a reproducible framework for modeling and inferring answer sequences in human-authored multiple-choice examinations. The approach combines positional priors, sequential pattern mining, conservative augmentation, ensemble learning, anchor-conditioned inference, and historical-similarity corrections. The study demonstrates that instructor-specific answer sequences deviate from randomness in measurable ways and that such structure can be exploited for prediction, calibration, and uncertainty quantification. The manuscript includes formal definitions, algorithmic details, evaluation protocols, and an extensive discussion of applicability and limitations.
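The abstract names positional priors and sequential pattern mining among the core components. The following is a minimal sketch, assuming a four-option answer format, hypothetical historical answer keys, and an illustrative blending weight, of how per-position priors and first-order transition statistics could be estimated and combined to score a candidate answer sequence; none of the data, names, or parameters below are taken from the manuscript itself.

```python
# Illustrative sketch (not the authors' code): estimate positional priors and
# first-order transition statistics from hypothetical historical answer keys,
# then blend them to score a candidate answer sequence.
import math
from collections import Counter, defaultdict

OPTIONS = ["A", "B", "C", "D"]  # assumed four-option format

# Hypothetical answer keys from the same instructor (illustrative only).
history = [
    "BADCABDCBA",
    "CADBBADCAB",
    "BBADCADBCA",
]

def positional_priors(keys, smoothing=1.0):
    """Per-position option frequencies with Laplace smoothing."""
    length = len(keys[0])
    priors = []
    for pos in range(length):
        counts = Counter(k[pos] for k in keys)
        total = len(keys) + smoothing * len(OPTIONS)
        priors.append({o: (counts.get(o, 0) + smoothing) / total for o in OPTIONS})
    return priors

def transition_probs(keys, smoothing=1.0):
    """First-order transition probabilities between consecutive answers."""
    counts = defaultdict(Counter)
    for k in keys:
        for prev, nxt in zip(k, k[1:]):
            counts[prev][nxt] += 1
    probs = {}
    for prev in OPTIONS:
        total = sum(counts[prev].values()) + smoothing * len(OPTIONS)
        probs[prev] = {o: (counts[prev][o] + smoothing) / total for o in OPTIONS}
    return probs

def score_sequence(seq, priors, trans, w_prior=0.5):
    """Log-score a candidate sequence by blending priors with transitions."""
    score = math.log(priors[0][seq[0]])
    for i in range(1, len(seq)):
        p = w_prior * priors[i][seq[i]] + (1 - w_prior) * trans[seq[i - 1]][seq[i]]
        score += math.log(p)
    return score

priors = positional_priors(history)
trans = transition_probs(history)
print(score_sequence("BADCABDCBA", priors, trans))
```

A higher log-score indicates a candidate sequence more consistent with the instructor's historical positional and sequential structure; the 50/50 blending weight is an arbitrary placeholder for whatever calibration the full framework would use.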
Machine learning
