
doi: 10.58680/ce201627659
Decades of research on rater training and scoring practices demonstrate that raters' preferences for writing quality are malleable; for instance, it is customary to "calibrate" raters' scoring decisions through documents like scoring protocols and rubrics. This essay argues that while rubrics from contemporary large-scale writing assessments (and the local assessments they inspire) maintain retrograde assumptions about language variation, relatively small adjustments to these rubrics could help raters and candidates establish what Joseph Williams once called "the ordinary kind of contract" that readers and writers routinely observe anywhere outside of testing contexts.
