
Variability is a formidable opponent of experimental management aimed at detecting spawner–recruit (SR) effects within a short time frame. I fitted Ricker SR models to 214 SR data sets and found that high residual error variability was common. For each of these data sets, I conducted an a priori power analysis to estimate the power of experiments that used the change in the Ricker a parameter as the treatment effect and a temporal reference alone (no subpopulation references). Power was calculated using both bootstrap resampling and standard normal-theory methods. The analysis revealed that large residual variability severely limits the power to detect even large changes in recruits per spawner (R/S). At the median level of error variability, meeting the design criteria of α = 0.05 and power = 0.8 required an experiment that doubled R/S to last about 20 years (assuming an equal number of treatment and control years). Several approaches to countering large error variability are discussed, along with their limitations.
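The power analysis described above can be illustrated with a minimal Monte Carlo sketch. In the Ricker model, ln(R/S) = a − bS + ε, so a treatment that changes the a parameter shifts ln(R/S) by Δa (doubling R/S corresponds to Δa = ln 2). Assuming, for simplicity, that spawner density effects are comparable across periods, the test reduces to detecting an intercept shift against residual noise of standard deviation σ. The function below is a hypothetical illustration, not the paper's actual procedure; the parameter values (σ, years per period) are placeholders, not the study's fitted estimates.

```python
import numpy as np
from scipy import stats

def ricker_shift_power(n_years=10, delta_a=np.log(2), sigma=0.7,
                       alpha=0.05, n_sim=2000, seed=1):
    """Monte Carlo estimate of the power to detect a shift of size
    delta_a in the Ricker ln(R/S) intercept, comparing n_years of
    control data with n_years of treatment data via a two-sample
    t-test. sigma is the residual SD of ln(R/S) around the SR curve.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        # Residuals of ln(R/S) in the control (reference) period.
        control = rng.normal(0.0, sigma, n_years)
        # Treatment period: same noise, intercept shifted by delta_a.
        treatment = rng.normal(delta_a, sigma, n_years)
        _, p_value = stats.ttest_ind(treatment, control)
        if p_value < alpha:
            rejections += 1
    # Fraction of simulated experiments that detect the shift.
    return rejections / n_sim
```

Running this with a residual SD near the values common in SR data shows how slowly power accumulates: with only a handful of years per period, a doubling of R/S is frequently missed, and many years of both control and treatment data are needed before power approaches 0.8, consistent with the multi-decade experiment durations reported in the abstract.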
