
doi: 10.2307/748571
Eastman's (1975) recent expression of concern in the JRME over the scarcity of replication studies should be shared by all who have an interest in research. Replication is critical in assessing the "significance" of research results. A correlation coefficient of .20, which is statistically significant at the .01 level, will be much more "significant" if we can demonstrate by replication that the same result occurs again and again. By replicating, we help rule out the possibility that a Type I error (rejecting the null hypothesis when it was in fact true) occurred in the original experiment; a brief worked example of this point appears at the end of this section. Furthermore, by independently replicating with different subjects, at different times and places, we also help to increase the generalizability of any "significant" results we do obtain.

Replication has been pleaded for elsewhere, with varying results. Bauernfeind (1968) documented replication in the physical sciences, noted the lack of replicated studies in the field of education, and succinctly pointed out that this is a rather strange state of affairs considering that "more things can go wrong in a behavioral research project than in a physical research project" (p. 126). Thompson (1974), in making "A Plea for Replication," mentioned his efforts urging candidates for the MA degree to undertake research replications, and then described a replication of one of his own studies that resulted in only partial confirmation of the original findings. Weitz (1956) contended that many of the "classical findings" we have carried for years could stand replication. (Or, as he notes, perhaps they couldn't stand it.) His plea was at least partially successful when the Journal of Experimental Psychology (Melton, 1957) adopted a policy of publishing "brief reports of simple extensions of previously reported findings and replications of previously reported experiments" (p. 1).

With periodic pleas for replication appearing in the literature (demonstrating at least journal editors' willingness to print pleas for replication), why do we see so few replication studies published? Eastman (1975) lists
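
As a rough sketch of the Type I error argument above, assume two independent studies each test the same null hypothesis at the .01 level (the significance level here is chosen for illustration, not drawn from Eastman's data):

\[
P(\text{both studies reject } H_0 \mid H_0 \text{ true}) = \alpha \times \alpha = .01 \times .01 = .0001
\]

Under these assumptions, the chance that a "significant" result confirmed by an independent replication is nothing more than a Type I error falls from 1 in 100 to 1 in 10,000, which is the statistical sense in which replication makes a result more "significant."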
