
Interrater correlations do provide an index of the reliability of job performance ratings. We show that the arguments presented by Murphy and DeShon (2000) lead to the radical conclusion that traditional measurement models (both classical test theory and generalizability theory models) can be used neither with job performance ratings nor with other measures used in I-O psychology and other areas of psychology and the social sciences. We show that this untenable conclusion rests on a confusion of validity issues and questions with reliability issues and questions. It also rests on the incorrect belief that classical measurement models can address only random response measurement error and cannot address other forms of measurement error. We further show that the solution Murphy and DeShon offer to the problem of measurement error in ratings, as they define this problem, cannot work. Properly understood, the position taken by Murphy and DeShon leaves us with the nihilistic conclusion that no appropriate measurement models are possible in psychological research, thus making meaningful research impossible.
