Abstract

The U.S. Supreme Court, in a 6-3 decision on June 29, 2023, effectively ended the use of race in college admissions [1]. Indeed, national polls found that a plurality of Americans (42%, according to a poll conducted by the University of Massachusetts [2]) agree that the policy should be discontinued, while 33% support its continued use in admissions decisions. As scholars of fair machine learning, we ponder how the Supreme Court decision shifts points of focus in the field. The most popular fair machine learning methods aim to achieve some form of "impact parity" by diminishing or removing the correlation between decisions and protected attributes, such as race or gender, akin to the 80% rule of thumb of the Equal Employment Opportunity Commission. Impact parity can be achieved by reversing historical discrimination, which corresponds to affirmative action, or by diminishing or removing the influence of the attributes correlated with the protected attributes, which is impractical as it severely undermines model accuracy. Moreover, impact disparity is not necessarily undesirable: for example, African-American patients suffer from chronic illnesses at higher rates than White patients, and it may therefore be justified to admit them to care programs at proportionally higher rates [3]. The burden-shifting framework under Title VII of U.S. law offers alternatives to impact parity. To determine employment discrimination, U.S. courts rely on the McDonnell Douglas burden-shifting framework, in which explanations, justifications, and comparisons of employment practices play a central role. Can similar methods be applied in machine learning?
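To make the 80% rule of thumb mentioned above concrete, the following is a minimal sketch of the underlying disparate impact ratio: the selection rate of a protected group divided by that of a reference group, with values below 0.8 conventionally flagged as adverse impact. The function name and the applicant data are hypothetical, introduced only for illustration; they are not part of the paper.

```python
import numpy as np

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Selection-rate ratio behind the EEOC 80% rule of thumb.

    decisions: 1 = favorable outcome (e.g., hired/admitted), 0 = not.
    groups: group label for each individual.
    Returns rate(protected) / rate(reference); values below 0.8 are
    the conventional red flag for adverse impact.
    """
    decisions = np.asarray(decisions)
    groups = np.asarray(groups)
    rate_protected = decisions[groups == protected].mean()
    rate_reference = decisions[groups == reference].mean()
    return rate_protected / rate_reference

# Hypothetical data: 10 applicants per group; 2 of group A and
# 5 of group B receive the favorable decision.
ratio = disparate_impact_ratio(
    decisions=[1, 1, 0, 0, 0, 0, 0, 0, 0, 0] + [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    groups=["A"] * 10 + ["B"] * 10,
    protected="A",
    reference="B",
)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.20 / 0.50 = 0.40 < 0.8
```

Note that this ratio measures only impact parity; it says nothing about whether a disparity is justified, which is exactly the question the burden-shifting framework addresses.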
