Abstract

Recently, a growing number of credibility assessment technologies (CATs) have been developed to assist human decision-making in evidence-based investigations, such as criminal investigations, financial fraud detection, and insurance claim verification. Despite the widespread adoption of CATs, it remains unclear how CAT and human biases interact during evidence collection and affect the fairness of investigation outcomes. To address this gap, we develop a Bayesian framework to model CAT adoption and the iterative collection and interpretation of evidence in investigations. Building on this framework, we conduct simulations to examine how CATs affect investigation fairness under various configurations of evidence effectiveness, CAT effectiveness, human biases, technological biases, and decision stakes. We find that when investigators are unaware of their own biases, CAT adoption generally increases the fairness of investigation outcomes if the CAT is more effective than the evidence and less biased than the investigators. However, the CATs' positive influence on fairness diminishes as investigators become aware of their own biases. Our results show that CATs' impact on decision fairness depends heavily on technological, human, and contextual factors. We further discuss the implications of our findings for CAT development, evaluation, and adoption.
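The abstract's core mechanism, iterative Bayesian updating of an investigator's belief as evidence accumulates, can be sketched as follows. This is a minimal illustration, not the paper's actual model: the function names, the binary-signal formulation, the sensitivity/specificity values, the decision threshold, and the modeling of investigator bias as an inflated prior are all assumptions introduced here for exposition.

```python
# Hedged sketch of iterative Bayesian evidence updating in an investigation.
# All parameters below (sensitivity, specificity, priors, threshold) are
# illustrative assumptions, not values taken from the paper.

def update(prior, signal, sensitivity, specificity):
    """Update P(claim is true) after one binary evidence signal.

    sensitivity = P(signal=1 | claim true)
    specificity = P(signal=0 | claim false)
    """
    if signal:
        like_true, like_false = sensitivity, 1.0 - specificity
    else:
        like_true, like_false = 1.0 - sensitivity, specificity
    numerator = like_true * prior
    return numerator / (numerator + like_false * (1.0 - prior))

def investigate(prior, signals, sensitivity, specificity, threshold=0.9):
    """Collect evidence sequentially; stop once belief crosses a
    decision threshold in either direction (high-stakes decision)."""
    belief = prior
    for s in signals:
        belief = update(belief, s, sensitivity, specificity)
        if belief >= threshold or belief <= 1.0 - threshold:
            break
    return belief

# An unbiased investigator starts from a neutral prior; a biased one
# starts from an inflated prior toward guilt for a disfavored group.
evidence = [1, 1, 0, 1]
unbiased = investigate(0.50, evidence, sensitivity=0.8, specificity=0.8)
biased = investigate(0.65, evidence, sensitivity=0.8, specificity=0.8)
```

Under this formulation, the biased investigator reaches the adverse-decision threshold from the same evidence with a higher final belief, which is one way prior bias can translate into unfair outcomes even when evidence processing itself is identical.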
