Abstract

The European Parliament recently proposed granting legal personhood to autonomous AI, a proposal that raises fundamental questions about the ethical status of AI. Can AI systems be moral agents? Can they be morally responsible for their actions and the consequences of those actions? Here we address these questions, focusing upon, inter alia, the possibility of moral agency and moral responsibility in artificial general intelligence; moral agency is a precondition for moral responsibility, which is, in turn, a precondition for legal punishment. In the first part of the paper we examine the moral agency status of AI in light of traditional moral philosophy, especially that of Kant, Hume, and Strawson, and clarify the possibility of Moral AI (i.e., AI with moral agency) by discussing the Ethical Turing Test, the Moral Chinese Room Argument, and Weak and Strong Moral AI. In the second part we examine the moral responsibility status of AI, and thereby clarify the possibility of Responsible AI (i.e., AI with moral responsibility). These issues will be crucial for an AI-pervasive technosociety in the (possibly near) future, especially for a post-human society after the development of artificial general intelligence.
