
doi: 10.2139/ssrn.2282675
The aim of this paper is to address the validity of default probability models calibrated on datasets containing very few defaults, or none at all. The few approaches proposed in the specialized literature are based on confidence intervals computed via probabilistic, Bayesian, or analytic methods. We benchmark these methods and explore the cases for which each is best suited. We propose a new method for computing the upper bound of the default probability in low-default portfolios, employing resampling approaches. For the special case of portfolios with no defaults, we investigate an adjustment based on non-Gaussian behavior of the distance to default. We also address the issue of dependency between defaults using non-Gaussian copulas.
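The abstract does not spell out the resampling method, but the classical confidence-interval baseline it benchmarks against can be sketched as a one-sided binomial (Clopper-Pearson-style) upper bound on the default probability, the standard construction for low-default portfolios. The function name and parameters below are illustrative, not taken from the paper:

```python
import math

def pd_upper_bound(n: int, d: int, confidence: float = 0.95) -> float:
    """One-sided upper confidence bound for the default probability
    of a portfolio with n obligors and d observed defaults.

    Solves P(X <= d | n, p) = 1 - confidence for p, with
    X ~ Binomial(n, p) and independent defaults (a Clopper-Pearson-style
    bound). Illustrative sketch, not the paper's resampling method.
    """
    alpha = 1.0 - confidence
    if d == 0:
        # No-default case has a closed form: (1 - p)^n = alpha
        return 1.0 - alpha ** (1.0 / n)

    def binom_cdf(p: float) -> float:
        # P(X <= d) for X ~ Binomial(n, p); decreasing in p
        return sum(math.comb(n, k) * p**k * (1 - p) ** (n - k)
                   for k in range(d + 1))

    # Bisection: find the p at which the lower tail just equals alpha
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if binom_cdf(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For example, with 250 obligors and zero defaults, the 95% bound is `1 - 0.05**(1/250)`, roughly 1.2%: even a spotless track record caps the estimated default probability well above zero, which is the core calibration problem the paper studies.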
