Abstract

While the discussion about ethical AI centers on conflicts between automated systems and individual human rights, those systems are often adopted to aid institutions rather than individuals. Starting from this observation, this chapter delineates the potential conflicts between institutions and ethical algorithms, with particular focus on two major attempts by the ML community, fair ML and interpretable ML, to make algorithms more responsible. Computer scientists, legal scholars, philosophers, and social scientists have presented both immanent and external critiques of the formalization of responsible AI/ML. Such critiques point to the computational or mathematical complexity of creating fair, transparent algorithms, and argue that computational solutions cannot fully capture social problems and may even worsen them. As an alternative, this chapter proposes an institutional perspective on responsible AI, which frames the question as one of polycentric governance over the sociotechnical platforms in which automated decision systems are embedded, where cooperation among users, civil society, regulatory bodies, and the firms involved is required to secure the systems' regularity and integrity.
