
doi: 10.1111/jedm.12292
Abstract
The use of mixed‐format tests made up of multiple‐choice (MC) items and constructed‐response (CR) items is popular in large‐scale testing programs, including the National Assessment of Educational Progress (NAEP) and many district‐ and state‐level assessments in the United States. Rater effects, or raters' scoring tendencies that cause performances to receive different scores than their quality warrants, are a concern for the interpretation of scores on CR items. However, few published studies have systematically examined the impact, on estimates of student ability, of ignoring rater effects when they are present in large‐scale mixed‐format assessments. Using results from an analysis of NAEP data, we systematically explored the impact of rater effects on student achievement estimates. Our results suggest that, under conditions reflecting many large‐scale mixed‐format assessments, directly modeling rater effects yields more accurate student achievement estimates than estimation procedures that do not incorporate raters. We discuss the implications of our findings for research and practice.
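The contrast the abstract draws, between modeling rater severity and ignoring it, can be illustrated with a minimal many‐facet Rasch‐style sketch. This is a hypothetical illustration, not the paper's actual estimation procedure; the function name, parameters, and values are assumptions introduced here.

```python
import math

def p_full_credit(theta, difficulty, rater_severity=0.0):
    """Probability of full credit on a dichotomously scored CR item
    under a many-facet Rasch-style model:
    logit = ability - item difficulty - rater severity.
    (Illustrative only; parameter values are invented.)"""
    logit = theta - difficulty - rater_severity
    return 1.0 / (1.0 + math.exp(-logit))

# A "harsh" rater (positive severity) lowers the expected score.
# If the model ignores raters (severity fixed at 0), the lower observed
# scores are instead attributed to lower student ability.
p_modeled = p_full_credit(theta=0.0, difficulty=0.0, rater_severity=0.5)
p_ignored = p_full_credit(theta=0.0, difficulty=0.0)
print(round(p_modeled, 3), round(p_ignored, 3))
```

Because the harsh rater's severity is confounded with ability when it is not modeled, ability estimates based on that rater's scores are biased downward, which is the mechanism behind the accuracy differences the abstract reports.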
