
Abstract In this paper we employ explainable artificial intelligence methods to identify unfairness in mortgage lending. Our aim is to reproduce credit lending decisions via explainable machine learning models and then assess whether such decisions are fair, particularly with respect to race. To this end, the paper employs data from New York state, derived from the Home Mortgage Disclosure Act (HMDA). We contribute to the existing literature in two main ways. First, we assess fairness both marginally, by means of parity measures based on the recently proposed S.A.F.E. AI metrics, and conditionally, by comparing explanations across population groups. Second, we extend the Shapley value approach by measuring the contribution of each explanatory variable not to the predicted values but to precision and recall, thereby better accounting for class imbalance in the data. Our empirical findings indicate the presence of racial disparities in loan approval rates. This underscores the need for increased efforts and targeted interventions to promote fair and equitable lending practices.
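The second contribution, attributing a performance metric rather than individual predictions to features via Shapley values, can be sketched as follows. This is an illustrative exact computation on synthetic data, not the paper's implementation: the function names, the logistic model, and the majority-class baseline for the empty coalition are our assumptions.

```python
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

def shapley_of_metric(X_tr, y_tr, X_te, y_te, metric=recall_score):
    """Exact Shapley decomposition of a test-set metric across features.

    Each feature's value is its weighted average marginal contribution
    to the metric over all feature coalitions; exponential in the number
    of features, so only feasible for small feature sets.
    """
    n = X_tr.shape[1]

    def coalition_value(S):
        if not S:
            # Empty coalition: predict the training majority class.
            pred = np.full(len(y_te), np.bincount(y_tr).argmax())
            return metric(y_te, pred)
        cols = sorted(S)
        model = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
        return metric(y_te, model.predict(X_te[:, cols]))

    phi = np.zeros(n)
    for j in range(n):
        others = [f for f in range(n) if f != j]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[j] += w * (coalition_value(set(S) | {j})
                               - coalition_value(set(S)))
    return phi

# Demo on synthetic data with 4 features (2^4 coalitions).
X, y = make_classification(n_samples=600, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
phi = shapley_of_metric(X_tr, y_tr, X_te, y_te)
```

By the efficiency property of Shapley values, the per-feature contributions sum to the gap between the full model's recall and the baseline's recall, which makes the decomposition easy to sanity-check.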
