Abstract

If artificial agents are to be created such that they occupy space in our social and cultural milieu, then we should expect them to be targets of folk psychological explanation. That is to say, their behavior ought to be explicable in terms of beliefs, desires, obligations, and especially intentions. Herein, we focus on the concept of intentional action, and especially its relationship to consciousness. After outlining some lessons learned from philosophy and psychology that give insight into the structure of intentional action, we find that attention plays a critical role in agency, and indeed, in the production of intentional action. We argue that the insights offered by the literature on agency and intentional action motivate a particular kind of computational cognitive architecture, one that has not been well explicated or computationally fleshed out among the community of AI researchers and computational cognitive scientists who work on cognitive systems. To give a sense of what such a system might look like, we present the ARCADIA attention-driven cognitive system as a first step toward an architecture to support the type of agency that rich human–machine interaction will undoubtedly demand.
