Abstract

The recent rise of artificial intelligence (AI) systems has prompted intense debate about whether such systems can attain higher-level mental states and about the ethics of their deployment. One question that has so far been neglected in the literature is whether AI systems are capable of action. While the philosophical tradition appeals to intentional mental states, others have argued for a widely inclusive theory of agency. In this paper, I argue for a gradual concept of agency, because both of these accounts fail to differentiate the agential capacities of AI systems.
