Working Paper
Understanding Models and Model Bias with Gaussian Processes
Abstract: Despite growing interest in the use of complex models, such as machine learning (ML) models, for credit underwriting, ML models are difficult to interpret, and they can learn relationships that yield de facto discrimination. How can we understand the behavior and potential biases of these models, especially if our access to the underlying model is limited? We argue that counterfactual reasoning is ideal for interpreting model behavior, and that Gaussian processes (GPs) can provide approximate counterfactual reasoning while also incorporating uncertainty about the underlying model’s functional form. We illustrate with an exercise in which a simulated lender uses a biased machine learning model to decide credit terms. Comparing aggregate outcomes does not clearly reveal the bias, but with a GP model we can estimate individual counterfactual outcomes. This approach can detect the bias in the lending model even when only a relatively small sample is available. To demonstrate the value of this approach for the more general task of model interpretability, we also show how the GP model’s estimates can be aggregated to recreate the partial dependence functions of the lending model.
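The exercise described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the simulated lending rule, variable names, sample size, and the scikit-learn GP configuration are all assumptions made for the sketch. A biased "black box" sets interest rates using a protected attribute; a GP fit to observed (features, rate) pairs then estimates each applicant's counterfactual rate with the attribute flipped, and averaging GP predictions recovers the model's partial dependence on that attribute.

```python
# Hedged sketch: detecting bias in a simulated lending model with a GP.
# All names and parameters here are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n = 200
income = rng.uniform(20, 120, n)                  # applicant income (thousands)
group = rng.integers(0, 2, n).astype(float)       # protected attribute (0/1)

# Biased lending model: the rate depends on income AND on group membership
# (a built-in 1.5-point penalty), plus a little noise.
rate = 10.0 - 0.05 * income + 1.5 * group + rng.normal(0.0, 0.1, n)

# Fit a GP to the observed decisions; an anisotropic RBF lets income and the
# binary attribute have separate length scales, and WhiteKernel absorbs noise.
X = np.column_stack([income, group])
kernel = RBF(length_scale=[10.0, 1.0]) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, rate)

# Individual counterfactual: the same applicant with the attribute flipped.
X_cf = X.copy()
X_cf[:, 1] = 1.0 - X_cf[:, 1]
effect = gp.predict(X_cf) - gp.predict(X)  # signed per-applicant rate change
# For group-1 applicants the flip should roughly remove the built-in penalty.

# Partial dependence of the lending model on `group`, recovered by averaging
# GP predictions over the sample at each fixed attribute value.
pd_vals = [gp.predict(np.column_stack([income, np.full(n, g)])).mean()
           for g in (0.0, 1.0)]
```

The gap `pd_vals[1] - pd_vals[0]` aggregates the individual counterfactual estimates into the model-level summary the abstract describes; `gp.predict(..., return_std=True)` would additionally report the GP's uncertainty about each counterfactual.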
Keywords: models; Gaussian process; model bias
JEL Classification: C10; C14; C18; C45
https://doi.org/10.18651/RWP2023-07
Full text (PDF): https://www.kansascityfed.org/Research%20Working%20Papers/documents/9592/rwp23-07cookpalmer.pdf
Bibliographic Information
Provider: Federal Reserve Bank of Kansas City
Part of Series: Research Working Paper
Publication Date: 2023-06-15
Number: RWP 23-07