
pmid: 30506832
Publication bias and other forms of outcome reporting bias are critical threats to the validity of findings from research syntheses. A variety of methods have been proposed for detecting selective outcome reporting in a collection of effect size estimates, including several based on assessing the asymmetry of funnel plots, such as Egger's regression test, the rank correlation test, and the Trim-and-Fill test. Previous research has demonstrated that Egger's regression test is miscalibrated when applied to log-odds ratio effect size estimates because of an artifactual correlation between the effect size estimate and its standard error. This study examines similar problems that arise in meta-analyses of the standardized mean difference, a ubiquitous effect size measure in educational and psychological research. In a simulation study of standardized mean difference effect sizes, we assess the Type I error rates of conventional tests of funnel plot asymmetry, as well as the likelihood ratio test from a three-parameter selection model. Results demonstrate that the conventional tests have inflated Type I error due to the correlation between the effect size estimate and its standard error, whereas tests based on either a simple modification of the conventional standard error formula or a variance-stabilizing transformation maintain close-to-nominal Type I error.
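The artifactual correlation described in the abstract can be illustrated with a small simulation. The sketch below holds the per-group sample size fixed (so the correlation cannot come from varying study sizes) and uses the conventional large-sample standard error of the standardized mean difference, which contains the estimate itself in its formula; the specific values of the true effect (0.5) and group size (20) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n1 = n2 = 20     # per-group sample size, held fixed to isolate the artifact
delta = 0.5      # hypothetical true standardized mean difference
reps = 5000

d_vals = np.empty(reps)
se_vals = np.empty(reps)
for i in range(reps):
    x = rng.normal(delta, 1.0, n1)   # treatment group
    y = rng.normal(0.0, 1.0, n2)     # control group
    # pooled standard deviation and Cohen's d
    sp = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1))
                 / (n1 + n2 - 2))
    d = (x.mean() - y.mean()) / sp
    # conventional large-sample SE of the SMD: note that d appears in it,
    # so the estimated SE co-varies with the estimate even at fixed n
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    d_vals[i] = d
    se_vals[i] = se

r = np.corrcoef(d_vals, se_vals)[0, 1]
print(f"correlation between d and its SE: {r:.2f}")
```

Because funnel-plot asymmetry tests such as Egger's regression look for exactly this kind of association between estimates and their standard errors, the built-in correlation can register as spurious "asymmetry" even when no selective reporting is present.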
Keywords: Meta-science; Quantitative Psychology; Quantitative Methods; Statistical Methods; Research Methods in Life Sciences; Odds Ratio; Animals; Humans; Computer Simulation; Likelihood Functions; Models, Statistical; Reproducibility of Results; Reference Standards; Research Design; Data Interpretation, Statistical; Sample Size; Regression Analysis; Programming Languages; Monte Carlo Method; Publication Bias
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article based on the underlying citation network (diachronically). | 244 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 0.1% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 1% |
