Abstract

We study notions of fairness in decision-making systems when individuals have diverse preferences over the possible outcomes of the decisions. Our starting point is the seminal work of Dwork et al., which introduced a notion of individual fairness (IF): given a task-specific similarity metric, every pair of individuals who are similarly qualified according to the metric should receive similar outcomes. We show that when individuals have diverse preferences over outcomes, requiring IF may unintentionally lead to less-preferred outcomes for the very individuals that IF aims to protect. A natural alternative to IF is the classic fair-division notion of envy-freeness (EF): no individual should prefer another individual's outcome over their own. Although EF allows for solutions where all individuals receive a highly preferred outcome, EF may also be overly restrictive. For instance, if many individuals agree on the best outcome, then if any individual receives this outcome, they all must receive it, regardless of each individual's underlying qualifications for the outcome. We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness. At a high level, PIIF requires that outcomes satisfy IF-style constraints, but allows for deviations provided they are in line with individuals' preferences. We show that PIIF can permit outcomes that are more favorable to individuals than any IF solution, while providing considerably more flexibility to the decision-maker than EF. In addition, we show how to efficiently optimize any convex objective over the outcomes subject to PIIF for a rich class of individual preferences. Finally, we demonstrate the broad applicability of the PIIF framework by extending our definitions and algorithms to the multiple-task targeted advertising setting introduced by Dwork and Ilvento.
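For concreteness, the three notions can be read over randomized allocations on a finite outcome set. The sketch below is a minimal illustration rather than the paper's formalism: the names (allocs, utilities, d), the choice of total variation distance as the outcome-space metric, and the greedy deviation search are assumptions made for exposition.

```python
# Illustrative sketch: allocs[i] is a probability vector over outcomes,
# utilities[i][k] is individual i's utility for outcome k, and d(i, j) is the
# task-specific similarity metric. Total variation (TV) distance stands in for
# the metric on randomized outcomes.

def tv(p, q):
    """Total variation distance between two probability vectors."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def exp_u(util, pi):
    """Expected utility of a randomized outcome pi under utility vector util."""
    return sum(p * u for p, u in zip(pi, util))

def min_u_in_ball(pi_j, util, radius):
    """Least expected utility achievable within TV distance `radius` of pi_j:
    greedily shift mass from the best outcomes onto the single worst outcome."""
    pi, budget = list(pi_j), radius
    worst = min(range(len(util)), key=lambda k: util[k])
    for k in sorted(range(len(util)), key=lambda k: util[k], reverse=True):
        if k == worst or budget <= 0:
            continue
        move = min(pi[k], budget)
        pi[k] -= move
        pi[worst] += move
        budget -= move
    return exp_u(util, pi)

def satisfies_if(allocs, d):
    # IF: similarly qualified individuals receive statistically similar outcomes.
    return all(tv(allocs[i], allocs[j]) <= d(i, j)
               for i in allocs for j in allocs if i != j)

def satisfies_ef(allocs, utilities):
    # EF: no individual prefers another individual's outcome to their own.
    return all(exp_u(utilities[i], allocs[i]) >= exp_u(utilities[i], allocs[j])
               for i in allocs for j in allocs if i != j)

def satisfies_piif(allocs, utilities, d):
    # PIIF: for every pair (i, j) there exists some outcome that would satisfy the
    # IF constraint relative to j's outcome and that i likes no more than their own.
    return all(exp_u(utilities[i], allocs[i]) >=
               min_u_in_ball(allocs[j], utilities[i], d(i, j))
               for i in allocs for j in allocs if i != j)
```

Under this reading, any IF allocation is also PIIF (take the deviation to be i's own outcome) and any EF allocation is also PIIF (take the deviation to be j's outcome, at distance zero), which is the sense in which PIIF relaxes both notions.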

Highlights

  • Algorithms are used to make consequential decisions about individuals

  • We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness

  • We show that PIIF can permit outcomes that are more favorable to individuals than any IF solution, while providing considerably more flexibility to the decision-maker than EF

Summary

Introduction

Algorithms are used to make consequential decisions about individuals. Examples range from determining which content users see online to deciding which applicants are considered in lending and hiring decisions. Consider a toy example in which two similarly qualified candidates each prefer a different interview: IF rules out the allocation where each candidate gets their most-preferred interview. This toy example demonstrates that IF can be overly restrictive, preventing some solutions where every individual is very happy with their outcome. If a decision-maker is required by IF to give similar members of majority and minority populations similar outcomes, the decision-maker may choose the IF solution that gives everyone the outcome preferred by the majority, running the risk of ignoring the preferences of historically marginalized groups of individuals. Faced with this shortcoming of IF, we consider alternative notions of fairness that may be better suited to handle settings where individuals hold rich preferences over outcomes. We adopt the perspective that, given a suitable metric, solutions that are individually fair provide strong protections from discrimination, even though they might not be envy-free. We seek to relax the IF requirements to allow for a richer set of solutions, while still providing meaningful protections against discrimination.
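To make the interview example concrete, here is a small sketch with two candidates that the metric treats as identical; the names, utilities, and deterministic allocation are illustrative assumptions, not values from the paper.

```python
# Two candidates whom the metric judges identical (distance 0), two interview slots.
d = {(1, 2): 0.0, (2, 1): 0.0}
utilities = {1: {"A": 1.0, "B": 0.0},   # candidate 1 prefers interview A
             2: {"A": 0.0, "B": 1.0}}   # candidate 2 prefers interview B
alloc = {1: "A", 2: "B"}                # each candidate gets their most-preferred interview

# IF: when the metric distance is 0, the outcomes must coincide -- violated here.
if_ok = all(alloc[i] == alloc[j] or d[(i, j)] > 0 for (i, j) in d)

# EF: no candidate prefers the other's outcome to their own -- satisfied here.
ef_ok = all(utilities[i][alloc[i]] >= utilities[i][alloc[j]] for (i, j) in d)

print(if_ok, ef_ok)  # False True
```

This allocation is envy-free, and it is also PIIF: the only outcome the IF constraint would allow relative to the other candidate is that candidate's own slot, which neither prefers to what they received. Individual fairness alone, however, forbids it.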

This Work
Our Contributions
Preference-Informed Individual Fairness
PIIF relaxes IF and EF
Metric Envy-Freeness
Optimization subject to PIIF
Structured preferences
Efficient optimization subject to PIIF
Fairness in Targeted Advertising
Fairness and social welfare in the multiple-task setting
Further related works
Preference-informed group fairness
Revisiting the assumptions underlying PIIF