
The striking generality and simplicity of Wilks' method have made it popular for quantifying modeling uncertainty. A conservative estimate of the confidence interval is obtained from a very limited set of randomly drawn model sample values, with the probability set by the assigned so-called stability. In contrast, the reproducibility of the estimated limits, or robustness, is beyond our control, as it strongly depends on the probability distribution of the model results. The inherent combination of random sampling and faithful estimation in Wilks' approach is shown here to often result in poor robustness. The estimated confidence interval is consequently not a well-defined measure of modeling uncertainty. To remedy this deficiency, adjustments of Wilks' approach are suggested, as well as alternative novel, effective but less well-known approaches based on deterministic sampling. For illustration, the robustness of Wilks' estimate is compared for uniform and normal model distributions.
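The contrast the abstract draws can be sketched numerically. The snippet below uses the first-order, one-sided Wilks formula (the required sample size N satisfies 1 − γ^N ≥ β for coverage γ and stability β, a standard result not stated in the abstract) and then, as an illustrative robustness measure of my own choosing, compares the spread of the resulting tolerance-limit estimate for a uniform versus a normal model distribution; the function names and the Monte Carlo setup are assumptions for illustration only.

```python
import math
import random
import statistics

def wilks_sample_size(gamma, beta):
    """Smallest N such that the maximum of N random draws exceeds the
    gamma-quantile with probability at least beta (first-order,
    one-sided Wilks criterion: 1 - gamma**N >= beta)."""
    return math.ceil(math.log(1.0 - beta) / math.log(gamma))

# Classic 95/95 requirement: 59 random model evaluations.
n = wilks_sample_size(0.95, 0.95)

def limit_spread(sampler, n, trials=2000, seed=1):
    """Repeat the Wilks estimate (sample maximum of n draws) many times
    and report its standard deviation as a simple robustness measure."""
    rng = random.Random(seed)
    limits = [max(sampler(rng) for _ in range(n)) for _ in range(trials)]
    return statistics.stdev(limits)

# The estimated limit scatters far more for a normal model distribution
# (unbounded tail) than for a uniform one (bounded support), which is
# the poor-robustness effect the abstract describes.
spread_uniform = limit_spread(lambda r: r.uniform(0.0, 1.0), n)
spread_normal = limit_spread(lambda r: r.gauss(0.0, 1.0), n)
```

The point of the comparison is that the stability β is chosen by the analyst, while the scatter of the estimated limit is dictated by the (generally unknown) model output distribution.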
