Abstract

We study the problem of making decisions under partial ignorance, or partially quantified uncertainty. This problem arises in many applications in robotics and AI, yet it has not received the attention it deserves. The traditional decision rules for decision under risk and under strict uncertainty (or complete ignorance) can naturally be extended to the more general case of decision under partial ignorance. We propose partial probability theory (PPT) for representing partial ignorance, and we discuss the extension of expected utility maximization to PPT. We argue that decision analysis should not focus exclusively on optimizing but should pay more serious attention to finding satisfactory actions and to reasoning with assumptions. The extended minimax regret decision rule appears to be an important rule for satisficing.
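To make the extended minimax regret rule concrete, the following is a minimal sketch, not the paper's exact formulation: it assumes partial ignorance is represented as a set of candidate probability distributions over states, defines the regret of an action under a distribution as the shortfall of its expected utility from the best achievable expected utility, and picks the action whose worst-case regret over the set is smallest. The utilities, action names, and distributions below are hypothetical.

```python
def expected_utility(utilities, action, dist):
    """Expected utility of `action` under the state distribution `dist`."""
    return sum(p * utilities[action][s] for s, p in enumerate(dist))

def minimax_regret(utilities, dists):
    """Pick the action minimizing worst-case regret over a set of distributions.

    utilities: dict mapping each action to a list of utilities, one per state.
    dists: the set of probability distributions considered possible
           (a simple stand-in for partial ignorance).
    """
    def regret(action, dist):
        # Shortfall of this action's expected utility from the best achievable one.
        best = max(expected_utility(utilities, a, dist) for a in utilities)
        return best - expected_utility(utilities, action, dist)

    return min(utilities, key=lambda a: max(regret(a, d) for d in dists))

# Hypothetical example: two actions, two states, and the unknown probability of
# state 0 bounded between 0.3 and 0.7 (regret is convex in the distribution, so
# checking the extreme points of the interval suffices).
utilities = {"act_safe": [5, 5], "act_risky": [10, 1]}
dists = [[0.3, 0.7], [0.7, 0.3]]
print(minimax_regret(utilities, dists))  # -> "act_risky"
```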
