Abstract

This paper explores and ultimately affirms the surprising claim that artificial intelligence (AI) can become part of the person, in a robust sense, and examines three ethical and legal implications. The argument is based on a rich, legally inspired conception of persons as free and independent rightholders and objects of heightened protection, but it is construed so broadly that it should also apply to mainstream philosophical conceptions of personhood. The claim is exemplified by a specific technology: devices that connect human brains with computers and are operated by AI algorithms. Under philosophically reasonable and empirically realistic conditions, these devices and the AI running them become parts of the person, in the same way as arms, hearts, or mental capacities are. This transformation shall be called empersonification. It has normative and especially legal consequences because people have broader and stronger duties regarding other persons (and parts of them) than regarding things. Three consequences with practical implications are: (i) AI-devices cease to exist as independent legal entities and come to enjoy the special legal protection of persons; (ii) therefore, third parties such as manufacturers or authors of software lose (intellectual) property rights in the device and software; (iii) persons become responsible for the outputs of the empersonified AI-devices to the same degree that they are for desires or intentions arising from the depths of their unconscious. More generally, empersonification marks a new step in the long history of human–machine interaction that deserves critical ethical reflection and calls for a stronger value-aligned development of these technologies.
