Abstract

Recent research papers and real-world tests suggest that future machines may develop some form of possibly rudimentary inner life. Philosophers have warned that the possibility of artificial suffering, or of machines as moral patients, should not be ruled out. In this paper, I reflect on the consequences for moral development of striving for AGI (Artificial General Intelligence). In the introduction, I present examples that point toward the future possibility of artificial suffering and highlight the increasing similarity between, for example, machine–human and human–human interaction. Next, I present and discuss responses to the possibility of artificial suffering that support a cautious attitude for the sake of the machines. From a virtue-ethical perspective concerned with the development of human virtues, I subsequently argue that humans should not pursue the path of developing and creating AGI, not merely for the sake of possible suffering in machines, but also because machine–human interaction is becoming more similar to human–human interaction, and for the sake of humans' own moral development. Thus, for several reasons, humanity as a whole should be extremely cautious about pursuing the path of developing AGI.

Full Text
Published version (Free)