Working Paper
One Threshold Doesn’t Fit All: Tailoring Machine Learning Predictions of Consumer Default for Lower-Income Areas
Modeling advances create credit scores that predict default better overall, but raise concerns about their effect on protected groups. Focusing on low- and moderate-income (LMI) areas, we use an approach from the Fairness in Machine Learning literature, fairness constraints via group-specific prediction thresholds, and show that gaps in true positive rates (the share of non-defaulters the model identifies as such) narrow significantly when separate thresholds are chosen for non-LMI and LMI tracts. However, the reduction isn't free: more defaulters are classified as good risks, ...
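The threshold mechanism the abstract describes can be illustrated with a minimal sketch. The data below are synthetic and the group names, score distributions, and cutoffs are illustrative assumptions, not the paper's actual data or calibration; the sketch only shows the mechanics: lowering the LMI cutoff until its true positive rate matches the non-LMI rate closes the TPR gap, at the cost of approving more defaulters.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, default_rate, score_shift):
    # Hypothetical scores (higher = safer borrower); positive class = non-defaulter.
    defaulted = rng.random(n) < default_rate
    scores = rng.normal(0.60 - score_shift, 0.15, n)
    scores[defaulted] = rng.normal(0.35 - score_shift, 0.15, defaulted.sum())
    return np.clip(scores, 0.0, 1.0), ~defaulted

def tpr(scores, good, thr):
    # True positive rate: share of non-defaulters approved (score >= threshold).
    return (scores[good] >= thr).mean()

def fpr(scores, good, thr):
    # False positive rate: share of defaulters approved (the cost side).
    return (scores[~good] >= thr).mean()

s_non, g_non = simulate(20_000, 0.10, 0.00)  # non-LMI tracts (assumed parameters)
s_lmi, g_lmi = simulate(20_000, 0.15, 0.08)  # LMI tracts, shifted-down scores

# One threshold for everyone produces a TPR gap between the groups.
thr = 0.5
gap_single = tpr(s_non, g_non, thr) - tpr(s_lmi, g_lmi, thr)

# Group-specific threshold: lower the LMI cutoff until its TPR reaches the
# non-LMI TPR (TPR is non-increasing in the threshold, so take the largest
# grid point that clears the target).
target = tpr(s_non, g_non, thr)
thr_lmi = max(t for t in np.linspace(0.0, thr, 501)
              if tpr(s_lmi, g_lmi, t) >= target)

gap_multi = tpr(s_non, g_non, thr) - tpr(s_lmi, g_lmi, thr_lmi)

# The cost: more LMI defaulters are now classified as good risks.
extra_defaulters_approved = fpr(s_lmi, g_lmi, thr_lmi) - fpr(s_lmi, g_lmi, thr)

print(f"TPR gap, single threshold:  {gap_single:.3f}")
print(f"TPR gap, group thresholds:  {gap_multi:.3f}")
print(f"Added defaulter approvals:  {extra_defaulters_approved:.3f}")
```

Under these assumed distributions, the group-specific cutoff nearly eliminates the TPR gap while the LMI false positive rate rises, mirroring the trade-off the abstract points to.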
Working Paper
Measuring Fairness in the U.S. Mortgage Market
Black Americans are both substantially more likely to have their mortgage application rejected and substantially more likely to default on their mortgages than White Americans. We take these stark inequalities as a starting point to ask the question: How fair or unfair is the U.S. mortgage market? We show that the answer to this question crucially depends on the definition of fairness. We consider six competing and widely used definitions of fairness and find that they lead to markedly different conclusions. We then combine these six definitions into a series of stylized facts that offer a ...