
doi: 10.1145/3637438
handle: 20.500.11824/1849
The impact of automated decision-making systems on human lives is growing, emphasizing the need for these systems to be not only accurate but also fair. The field of algorithmic fairness has expanded significantly in the past decade, with most approaches assuming that training and testing data are drawn independently and identically from the same distribution. In practice, however, the training and deployment environments differ, compromising both the performance and the fairness of decision-making algorithms in real-world scenarios. A new area of research has emerged to address how to maintain fairness guarantees in classification tasks when the data generation processes differ between the source (training) and target (testing) domains. The objective of this survey is to offer a comprehensive examination of fair classification under distribution shift by presenting a taxonomy of current approaches. The taxonomy is organized according to the information available from the target domain, distinguishing between adaptive methods, which adapt to the target environment using whatever information is available, and robust methods, which make minimal assumptions about it. Additionally, this study emphasizes alternative benchmarking methods, investigates the interconnection with related research fields, and identifies potential avenues for future research.
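
As a toy illustration of the covariate-shift setting the survey studies, the following sketch (not taken from the paper; the data generator, group variable, and Gaussian densities are all hypothetical) trains a classifier on a source domain whose feature distribution differs from the target's, then compares an unweighted fit against importance weighting, one classic adaptive technique, on both target accuracy and a demographic-parity gap.

```python
# Minimal covariate-shift sketch: source and target share P(y | x, a),
# but the covariate distribution P(x) differs between domains.
# Importance weighting reweights source examples by p_target(x)/p_source(x).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, mean):
    # Hypothetical generator: one feature x, protected attribute a, label y.
    x = rng.normal(mean, 1.0, n)
    a = rng.integers(0, 2, n)
    y = (x + 0.5 * a + rng.normal(0, 0.5, n) > 0).astype(int)
    return x.reshape(-1, 1), a, y

# Source (training) and target (deployment) domains with shifted P(x).
Xs, As, ys = sample(2000, mean=-0.5)
Xt, At, yt = sample(2000, mean=+0.5)

def gauss(x, mu):
    # Unit-variance Gaussian density; here both domains are known Gaussians.
    # In practice the density ratio is estimated, e.g. via a domain classifier.
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

w = gauss(Xs[:, 0], 0.5) / gauss(Xs[:, 0], -0.5)

unweighted = LogisticRegression().fit(Xs, ys)
weighted = LogisticRegression().fit(Xs, ys, sample_weight=w)

def dp_gap(model, X, a):
    # Demographic-parity gap: |P(yhat=1 | a=0) - P(yhat=1 | a=1)|.
    p = model.predict(X)
    return abs(p[a == 0].mean() - p[a == 1].mean())

print("target accuracy (unweighted):", unweighted.score(Xt, yt))
print("target accuracy (weighted):  ", weighted.score(Xt, yt))
print("target DP gap   (unweighted):", dp_gap(unweighted, Xt, At))
print("target DP gap   (weighted):  ", dp_gap(weighted, Xt, At))
```

Robust methods, by contrast, would not reweight toward a specific target distribution but instead optimize worst-case performance over a set of plausible targets, for example via distributionally robust optimisation.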
Algorithmic fairness, Covariate shift, Distribution shift, Online learning, Trustworthy machine learning, Uncertainty, Distributionally robust optimisation
| Indicator | Description | Value |
| --- | --- | --- |
| Citations | An alternative to the "Influence" indicator; also reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | 1 |
| Popularity | Reflects the "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
