An appealing goal in the training of decision makers is the development of formal reasoning skills based on decision analysis. Such techniques are applicable to virtually any problem and are guaranteed to be formally consistent. Behavioral evidence regarding “heuristics and biases,” however, suggests that deviations from such methods are systematic, hard to eliminate, and characteristic of both novices and experts in a number of real-world domains. Would training in formal methods to avoid biases (if such training were possible) improve the quality of decision making? There are reasons to believe that it would not. “Biases” may in fact be inescapable by-products of the knowledge structures and problem-solving methods that make experienced decision makers effective. Inconsistencies in judgments and choices (1) may simply not matter much in real-world domains; (2) may be corrected in dynamic environments; and (3) may in any case be offset by the advantages of effective use of decision-maker knowledge. Moreover, the identification of biases based on the current experimental literature may itself be mistaken, owing to incomplete or otherwise inadequate models. Effective performance in most domains is the result of (a) a repertoire of recognition-based responses, and (b) a set of meta-recognitional strategies that facilitate recognition and validate its results. In training based on these concepts, “errors” are avoided by acquiring and making the most effective use of domain knowledge, and by adapting to feedback, changes in the situation, and shortcomings in one's current approach as they appear.