Learning to be Fair: A Consequentialist Approach to Equitable Decision-Making

Authors

See citation below for complete author information.

Abstract

In the dominant paradigm for designing equitable machine learning systems, one works to ensure that model predictions satisfy various fairness criteria, such as parity in error rates across race, gender, and other legally protected traits. That approach, however, typically ignores the downstream decisions and outcomes that predictions affect, and, as a result, can induce unexpected harms. Here we present an alternative framework for fairness that directly anticipates the consequences of decisions. Stakeholders first specify preferences over the possible outcomes of an algorithmically informed decision-making process. For example, lenders may prefer extending credit to those most likely to repay a loan, while also preferring similar lending rates across neighborhoods. One then searches the space of decision policies to maximize the specified utility. We develop and describe a method for efficiently learning these optimal policies from data for a large family of expressive utility functions, facilitating a more holistic approach to equitable decision-making.
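To make the framework concrete, below is a minimal sketch (not the authors' implementation) of the lending example from the abstract: a stakeholder utility that trades off expected net repayment against parity in lending rates across two neighborhoods, maximized by searching a small space of per-group threshold policies. The synthetic data, the payoff of +1/-1 per repaid/defaulted loan, and the parity weight lambda_parity are all illustrative assumptions, not details from the paper.

```python
# Sketch of consequentialist policy learning under assumed utility and data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic applicants: estimated repayment probability and neighborhood group.
n = 2000
group = rng.integers(0, 2, size=n)                          # neighborhoods 0 and 1
p_repay = np.clip(rng.beta(4, 2, size=n) - 0.1 * group, 0.01, 0.99)

lambda_parity = 0.5  # assumed stakeholder weight on similar lending rates

def utility(thresholds):
    """Utility of a per-group threshold policy: expected net repayment
    minus a penalty on the gap in approval rates across groups."""
    approve = p_repay >= thresholds[group]
    # Illustrative payoff: +1 if a loan is repaid, -1 if it defaults.
    value = (2 * p_repay - 1)[approve].sum() / n
    rates = [approve[group == g].mean() for g in (0, 1)]
    return value - lambda_parity * abs(rates[0] - rates[1])

# Exhaustively search the (tiny) policy space of per-group thresholds.
grid = np.linspace(0.0, 1.0, 51)
best = max((utility(np.array([t0, t1])), t0, t1)
           for t0 in grid for t1 in grid)
print(f"best utility {best[0]:.3f} at thresholds {best[1]:.2f}, {best[2]:.2f}")
```

Grid search is viable only for toy policy spaces like this one; the method described in the paper is aimed at learning optimal policies efficiently from data for a much larger family of expressive utility functions.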

Citation

Chohlas-Wood, Alex, Madison Coots, Henry Zhu, Emma Brunskill, and Sharad Goel. "Learning to be Fair: A Consequentialist Approach to Equitable Decision-Making." February 2022.