Abstract

Rationality is a useful metaphor for understanding autonomous, intelligent agents. A persuasive view of intelligent agents uses cognitive primitives such as intentions and beliefs to describe, explain, and specify their behavior. These primitives are often associated with a notion of commitment that is internal to the given agent. However, at first sight, there is a tension between commitments and rationality. We show how the two concepts can be reconciled for the important and interesting case of limited, intelligent agents. We show how our approach extends to handle more subtle issues such as precommitments, which have previously been assumed to be conceptually too complex. We close with a proposal to develop conative policies as a means to represent commitments in a generic, declarative manner.
