
In this paper, we introduce a calibration procedure that converts the uncalibrated output scores of neural networks for synthetic speech detection into calibrated, interpretable likelihood ratios. The procedure assumes that the networks to be calibrated are deterministic and have been trained to convergence. When these conditions hold, their output scores can be transformed into likelihood ratios using only a small set of validation and calibration data, with no retraining of the models. We tested the full workflow on a state-of-the-art network, demonstrating not only effective calibration but also improved fault tolerance against inadequate inputs.
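As a rough illustration of the idea of mapping raw detector scores to likelihood ratios from a calibration set, the sketch below uses linear logistic (Platt-style) calibration: an affine map of the score is fitted by logistic regression, after which, for a balanced calibration set, the logit of the predicted probability can be read as a log-likelihood ratio. This is a generic calibration technique, not the paper's specific procedure; all names, the label convention (1 = bona fide, 0 = synthetic), and the toy score distributions are hypothetical.

```python
import numpy as np

def fit_llr_calibration(scores, labels, lr=0.1, n_iter=2000):
    """Fit an affine score-to-LLR map z = a*s + b by logistic regression.

    Plain gradient descent on the cross-entropy loss; labels are
    1 = bona fide, 0 = synthetic (a hypothetical convention).
    """
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        z = a * scores + b
        p = 1.0 / (1.0 + np.exp(-z))        # sigmoid(z)
        grad_z = p - labels                  # d(loss)/dz for cross-entropy
        a -= lr * np.mean(grad_z * scores)
        b -= lr * np.mean(grad_z)
    return a, b

def to_llr(scores, a, b):
    """Map raw scores to calibrated log-likelihood ratios."""
    return a * scores + b

# Toy calibration data: two well-separated score distributions.
rng = np.random.default_rng(0)
cal_scores = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
cal_labels = np.concatenate([np.zeros(200), np.ones(200)])
a, b = fit_llr_calibration(cal_scores, cal_labels)
```

Because the map is monotone and affine, discrimination performance of the underlying detector is unchanged; only the scale of the scores is corrected, so no retraining is needed, consistent with the workflow described above.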
