Abstract

Practical reasoning aims at deciding what actions to perform in light of the goals a rational agent possesses. This has been a topic of interest in both philosophy and artificial intelligence, but these two disciplines have produced very different models of practical reasoning. The purpose of this paper is to examine each model in light of the other and produce a unified model adequate for the purposes of both disciplines and superior to the standard models employed by either. The philosophical (decision-theoretic) model directs activity by evaluating acts one at a time in terms of their expected utilities. It is argued that, except in certain special cases, this constitutes an inadequate theory of practical reasoning, leading to intuitively incorrect action prescriptions. Acts must be viewed as parts of plans, and plans evaluated as coherent units rather than piecemeal in terms of the acts comprising them. Rationality dictates choosing acts by first choosing the plans prescribing them. Plans, in turn, are compared by looking at their expected values. However, because plans can be embedded in one another, we cannot select plans just by maximizing expected values. Instead, we must employ a more complex criterion here named ‘coextendability’.

