Abstract

Given the popularity of John Rawls’ theory of justice as fairness as an ethical framework in the artificial intelligence (AI) field, this article examines how the theory fits with three different conceptual applications of AI technology. First, the article discusses a proposition by Ashrafian to let an AI agent perform the deliberation that produces a Rawlsian social contract governing humans. The discussion demonstrates the inviability of such an application, as it contradicts foundational aspects of Rawls’ theories. An exploration of more viable applications of Rawlsian theory in the AI context follows, introducing the distinction between intrinsic and extrinsic theoretical adherence, i.e., the difference between approaches that integrate Rawlsian theory into the system design and those that situate AI systems within Rawls-consistent policy and legislative frameworks. The article uses emerging AI legislation in the EU and the U.S., as well as Gabriel’s argument for adopting Rawls’ publicity criterion in the AI field, as examples of extrinsic adherence to Rawlsian theory. A discussion of the epistemological challenges of predictive AI systems then illustrates some implications of intrinsic adherence to Rawlsian theory. While AI systems can make short-term predictions about human behavior with intrinsic adherence to Rawls’ theory of justice as fairness, long-term, large-scale predictions do not adhere to the theory but instead constitute the type of utilitarianism Rawls vehemently opposed. The article concludes with an overview of the implications of these arguments for policymakers and regulators.
