Abstract
The problem of fair machine learning has drawn much attention over the last few years, and the bulk of the solutions offered are, in principle, empirical. However, algorithmic fairness also raises important conceptual issues that would fail to be addressed if one relied entirely on empirical considerations. Herein, I will argue that the current debate has developed an empirical framework that has brought important contributions to the development of algorithmic decision-making, such as new techniques to discover and prevent discrimination, additional assessment criteria, and analyses of the interaction between fairness and predictive accuracy. However, the same framework has also raised higher-order issues regarding the translation of fairness into metrics and quantifiable trade-offs. Although the (empirical) tools which have been developed so far are essential to address discrimination encoded in data and algorithms, their integration into society elicits key (conceptual) questions such as: What kind of assumptions and decisions underlie the empirical framework? How do the results of the empirical approach penetrate public debate? What kind of reflection and deliberation should stakeholders have over available fairness metrics? I will outline the empirical approach to fair machine learning, i.e. how the problem is framed and addressed, and suggest that there are important non-empirical issues that should be tackled. While this work focuses on the problem of algorithmic fairness, the lesson can extend to other conceptual problems in the analysis of algorithmic decision-making, such as privacy and explainability.
Highlights
Since scoring and classification algorithms have been introduced to support, if not replace, human decisions in contexts as diverse as healthcare, insurance, employment and criminal justice, the problem of fairness has become a central theme in the field of machine learning.
Applied to the problem of algorithmic fairness, this translates into concrete questions such as: “how do fairness metrics relate to the various conceptions of justice?”; “what kind of assumptions do they presuppose?”; “what type of reasoning and decisions do they solicit?”
This virtue counterbalances the effects of laws and, in particular, the deficiencies caused by their mechanical application. It demands the use of judgment in adapting the rules to particular situations where the subject matter is true only for the most part. This flexibility would be valuable even in the context of fair machine learning, where compromises can vary depending on the application domain - for example, a loss of accuracy could be acceptable in a fraud detection application but may not be tolerable in a diagnostic tool for cancer.
Summary
Since scoring and classification algorithms have been introduced to support, if not replace, human decisions in contexts as diverse as healthcare, insurance, employment and criminal justice, the problem of fairness has become a central theme in the field of machine learning. The problem of fairness is addressed in the context of a policy prediction problem (Kleinberg et al., 2015): a decision about the future of a subject is made, and the outcome should not be negatively affected by any sensitive attribute or feature that is considered irrelevant to that decision. Starting from three claims of the empirical paradigm (see the section “Conceptual problems in fair machine learning”), I will offer some stimuli for critical reflection on specific difficulties that affect the empirical solutions and that refer to the conceptual dimensions of the problem. In the section “Conceptual problems in fair machine learning” I will introduce a few conceptual difficulties that relate to theoretical assumptions or implicit beliefs of the dominant empirical approach, and conclude with some final remarks.
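To make concrete what "translating fairness into metrics" means in the policy prediction setting described above, the following sketch computes demographic parity, one widely discussed fairness metric (among many; the paper does not endorse any particular one). All function names and data here are hypothetical and for illustration only:

```python
# Illustrative sketch (not from the paper): demographic parity compares the
# rate of favourable decisions across groups defined by a sensitive attribute.

def positive_rate(decisions, group, value):
    """Share of favourable decisions (1) among subjects in one group."""
    members = [d for d, g in zip(decisions, group) if g == value]
    return sum(members) / len(members)

def demographic_parity_gap(decisions, group):
    """Absolute difference in favourable-decision rates between two groups."""
    return abs(positive_rate(decisions, group, 0)
               - positive_rate(decisions, group, 1))

# Toy data: 1 = favourable decision (e.g. loan approved); two groups, 0 and 1.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
group     = [0, 0, 0, 0, 1, 1, 1, 1]

gap = demographic_parity_gap(decisions, group)  # 0.75 - 0.25 = 0.5
```

A gap of zero would mean both groups receive favourable decisions at the same rate; enforcing this constraint can reduce predictive accuracy, which is exactly the kind of quantifiable trade-off the paper argues calls for conceptual, not merely empirical, deliberation.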