Abstract

Two new approaches for mitigating confirmation bias are proposed and evaluated in the domain of collaborative intelligence analysis. Trainee Naval intelligence analysts and reservists (N=27) role-played analysts working 10 problems, taking one of two collaborative roles across problems. Each problem's hypothesis had an equal proportion of supporting and refuting evidence. Participants chose and prioritized the subset of available evidence they judged most important for evaluating a particular hypothesis. Bias manifested as selecting or prioritizing a skewed subset of supporting evidence to focus on or share. Two new low-workload debiasing methods were evaluated. First, supporting and refuting evidence was laid out graphically instead of in text. Second, other analysts shared their interpretations of evidence, rather than analysts interpreting the evidence on their own. Results showed a significant bias to focus on and share confirming evidence across all conditions, and the highest-ranked evidence was also the most supportive. However, bias was reduced for problems worked with the graphical evaluation format in one collaborative role. Participants were also less biased toward accepting hypotheses for problems worked in the graphical format when they had interpreted the evidence-hypothesis relationship themselves. These results have implications for the basic study of decision bias (conflating support and importance) and for the applied design of systems that help reduce bias.
