Abstract

The current discourse on fairness, accountability, and transparency in machine learning is driven by two competing narratives: sociotechnical dogmatism, which holds that society is full of inefficiencies and imperfections that can only be solved by better algorithms; and sociotechnical skepticism, which opposes many instances of automation on principle. Both perspectives, we argue, are reductive and unhelpful. In this chapter, we review a large, diverse body of literature in an attempt to move beyond this restrictive duality, toward a pragmatic synthesis that emphasizes the central role of context and agency in evaluating new and emerging technologies. We show how epistemological and ethical considerations are inextricably intertwined in contemporary debates on algorithmic bias and explainability. We trace the dialectical interplay between dogmatic and skeptical narratives across disciplines, merging insights from social theory and philosophy. We review a number of theories of explanation, ultimately endorsing a sociotechnical pragmatism that combines elements of Floridi’s levelism and Mayo’s reliabilism to place a special emphasis on notions of agency and trust. We conclude that this hybrid does more to promote fairness, accountability, and transparency in machine learning than dogmatic or skeptical alternatives.