Abstract

Achieving fairness in algorithmic decision-making tools is an issue of growing importance: today, unfair decisions made by such tools can even carry legal consequences. We propose a new constraint that integrates fairness into data envelopment analysis (DEA), allowing the relative efficiency scores of decision-making units (DMUs) to be calculated with fairness taken into account. The proposed fairness constraint prevents disparate impact from occurring in the efficiency scores and enables a single data envelopment analysis to be built for the privileged and unprivileged groups of DMUs simultaneously. We show that the proposed method, FairDEA, produces an interpretable model, which we tested on a synthetic dataset and two real-world examples: a ranking of hybrid and conventional car designs, and the Latin American and Caribbean economies. We interpret the FairDEA method by comparing it to the basic DEA and to the balanced fairness and efficiency method (BFE DEA). Along with calculating the disparate impact of the model, we performed a Wilcoxon rank-sum test to inspect the rankings for fairness. The results show that the FairDEA method achieves efficiency scores similar to those of the other methods, but without disparate impact. Statistical analysis indicates that the differences in ranking between the groups are not statistically significant, which means that the ranking is fair. This method contributes both to the development of data envelopment analysis and to the inclusion of fairness in efficiency analysis.
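For readers unfamiliar with DEA, the relative efficiency scores mentioned above are conventionally obtained by solving one linear program per DMU. The sketch below shows the standard input-oriented CCR model in multiplier form, solved with `scipy.optimize.linprog`; it illustrates plain DEA only, and the toy input/output data are hypothetical. The paper's FairDEA fairness constraint is not reproduced here, since its exact formulation is not given in the abstract.

```python
# Standard input-oriented CCR DEA (multiplier form) via linear programming.
# Illustrative sketch only: the data are made up, and FairDEA's additional
# fairness constraint is NOT included.
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 DMUs, 2 inputs, 1 output (hypothetical values)
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0], [5.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # outputs

def ccr_efficiency(k):
    """Efficiency of DMU k:  max u.y_k
       s.t. v.x_k = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector z = [u (s entries), v (m entries)];
    # linprog minimizes, so negate the objective.
    c = np.concatenate([-Y[k], np.zeros(m)])
    A_ub = np.hstack([Y, -X])       # u.y_j - v.x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[k]]).reshape(1, -1)  # v.x_k = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    return -res.fun

scores = [round(ccr_efficiency(k), 3) for k in range(len(X))]
print(scores)
```

In this toy example the first three DMUs lie on the efficient frontier (score 1), while the fourth is dominated by a convex combination of the others and receives a score below 1.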

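The fairness checks named in the abstract can also be sketched. Below, the Wilcoxon rank-sum test compares the efficiency-score distributions of the two groups, and disparate impact is operationalized as the ratio of "sufficiently efficient" rates; the scores, the 0.8 threshold, and this particular operationalization are illustrative assumptions, not the paper's own data or exact definition.

```python
# Hedged sketch: ranking-fairness checks between two DMU groups.
# The efficiency scores and the threshold below are hypothetical.
import numpy as np
from scipy.stats import ranksums

privileged   = np.array([0.92, 0.88, 1.00, 0.75, 0.81])  # efficiency scores
unprivileged = np.array([0.90, 0.85, 0.97, 0.78, 0.80])

# Wilcoxon rank-sum test: H0 = both groups' scores come from the same
# distribution, i.e. the ranking does not systematically favour either group.
stat, p_value = ranksums(privileged, unprivileged)
print(f"p-value: {p_value:.3f}")  # a large p-value => no significant rank gap

# Disparate impact as the ratio of "efficient enough" rates between groups
# (threshold is an illustrative choice); values near 1 suggest no disparity.
threshold = 0.8
di = np.mean(unprivileged >= threshold) / np.mean(privileged >= threshold)
print(f"disparate impact ratio: {di:.2f}")
```

With these made-up scores the test yields a large p-value and a disparate-impact ratio of 1, which is the pattern the abstract reports for FairDEA: similar efficiency scores across groups with no statistically significant ranking difference.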