
Constructing beliefs about the world usually requires simplifying assumptions. We analyze the beliefs of agents who make reasonable assumptions to model a complex situation and who make predictions conditional on those assumptions. Our theory identifies tight connections between model uncertainty (the extent to which different models lead to different predictions), overprecision (too-small variance estimates), and interpersonal disagreement (variance in mean predictions). We test these predictions in an experiment in which participants view a scatterplot and report mean and variance estimates for out-of-sample predictions. Consistent with our theory, different people focus on different plausible models but provide reasonable estimates of uncertainty conditional on their model. As a result, model uncertainty increases both overprecision and disagreement. Outside of the lab, we find similar evidence in the Survey of Professional Forecasters, including that overprecision positively covaries with disagreement.
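The link between model uncertainty, overprecision, and disagreement can be illustrated with the law of total variance: if each agent conditions on one plausible model and reports only the within-model variance, the between-model variance shows up as disagreement and is exactly the amount by which reported variances are too small. A minimal numerical sketch, with assumed model means and a shared within-model variance (all values hypothetical):

```python
import numpy as np

# Hypothetical setup: each agent commits to one of several plausible models
# of the same data and reports a mean forecast plus a conditional variance.
model_means = np.array([1.0, 2.0, 3.0])  # E[Y | model m], assumed values
within_var = 0.5                         # Var(Y | model m), assumed equal across models

# Law of total variance: Var(Y) = E[Var(Y|M)] + Var(E[Y|M])
between_var = model_means.var()          # disagreement: variance of mean forecasts
total_var = within_var + between_var     # true predictive variance over models

# Each agent's reported variance (within_var) understates total_var by
# exactly the between-model term, so here overprecision equals disagreement.
overprecision = total_var - within_var
print(overprecision, between_var)
```

In this stylized decomposition, more model uncertainty (larger spread in `model_means`) simultaneously raises overprecision and disagreement, which is the comovement the abstract describes.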
