
This paper compares two methods of assessing variability in simulation output. The methods make explicit allowance for two sources of variation: that caused by uncertainty in estimating unknown input parameters (parameter uncertainty), and that caused by the random variation included within the simulation model itself (simulation uncertainty). The first method is based on classical statistical differential analysis; we show explicitly that, under general conditions, the two sources contribute separately to the total variation. The classical approach requires certain sensitivity coefficients to be estimated, an effort that grows linearly with the number of unknown parameters and becomes progressively more expensive; moreover, when the number of parameters is large, there is the additional difficulty of detecting spurious variation. It is shown that a parametric form of bootstrap sampling provides an alternative method that suffers from neither problem. For illustration, simulation ...
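
As a rough illustration of the parametric-bootstrap idea described in the abstract (not the paper's own example), the sketch below assumes a toy single-server queue whose one unknown input parameter, the service rate, is fitted from data; the fitted model is then resampled, the parameter re-estimated, and the simulation rerun with fresh random numbers, so that the spread of the bootstrap outputs reflects parameter and simulation uncertainty together. All names (`run_simulation`, the queueing model, sample sizes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_simulation(service_rate, n_customers=200, rng=None):
    """Toy stochastic simulation: mean waiting time in a single-server queue
    with unit-rate Poisson arrivals and exponential service at `service_rate`
    (Lindley recursion). Purely illustrative."""
    if rng is None:
        rng = np.random.default_rng()
    inter_arrivals = rng.exponential(1.0, n_customers)           # arrival rate fixed at 1.0
    services = rng.exponential(1.0 / service_rate, n_customers)
    wait, waits = 0.0, []
    for a, s in zip(inter_arrivals, services):
        wait = max(0.0, wait + s - a)   # waiting time of the next customer
        waits.append(wait)
    return np.mean(waits)

# "Observed" service-time data from which the unknown rate is estimated.
observed = rng.exponential(1.0 / 1.5, size=50)     # true rate 1.5, unknown to the analyst
rate_hat = 1.0 / observed.mean()                   # maximum-likelihood estimate

# Parametric bootstrap: resample data from the fitted model, re-estimate the
# parameter, and rerun the simulation with fresh random numbers each time.
B = 500
outputs = np.empty(B)
for b in range(B):
    boot_sample = rng.exponential(1.0 / rate_hat, size=observed.size)
    boot_rate = 1.0 / boot_sample.mean()
    outputs[b] = run_simulation(boot_rate, rng=rng)

# Variance of the bootstrap outputs: parameter + simulation uncertainty combined,
# with no sensitivity coefficients to estimate.
print(f"estimated rate: {rate_hat:.3f}, bootstrap output variance: {outputs.var(ddof=1):.4f}")
```

Note that the loop's cost is one simulation run per bootstrap replicate regardless of how many input parameters are fitted, which is the contrast drawn in the abstract with the classical differential (sensitivity-coefficient) approach.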
