Abstract

The computational capabilities of AI engines, integrated with human knowledge and experience, can help create intelligent human-in-the-loop (HITL) decision systems. Such synergistic frameworks improve the decision-making process and make AI-enabled systems more robust, accurate, and intelligent. In safety-critical applications that require a certain level of human supervision, errors by either the human or the AI engine can be costly. However, modeling human behavior in a collaborative human-AI decision setup is not straightforward: humans rely on cognitive mechanisms and decision heuristics to process information and make decisions under uncertainty. This paper, for the first time, presents a systematic framework for modeling, tracking, and adapting to behavioral biases in a collaborative decision environment within an “Active Learning” context. The proposed framework is validated through experiments on a real-world pancreatic cancer dataset. We consider five learning scenarios based on different grades of human experts and compare the performance of bias-aware decision models with naive models. Bias-aware models are observed to improve the classification accuracy of decision models by up to 16%.
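
To make the workflow concrete, the sketch below shows a generic bias-aware HITL active learning loop of the kind the abstract describes, written in Python with scikit-learn. It is an illustrative assumption, not the paper's method: the synthetic data, the `biased_expert` simulator (label flips that are more likely on ambiguous cases), the uncertainty-sampling query rule, and the fixed reliability weighting inside `run_active_learning` are all hypothetical stand-ins, whereas the actual framework models, tracks, and adapts to expert biases over time.

```python
# Minimal, illustrative sketch of a bias-aware human-in-the-loop active
# learning loop. This is NOT the paper's algorithm: the expert bias model
# (label flips concentrated on ambiguous cases) and the reliability
# weighting are hypothetical stand-ins used only to show the workflow.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a clinical dataset (not the pancreatic cancer data).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_pool, y_pool = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

def biased_expert(true_label, model_margin, flip_rate=0.35):
    """Simulated expert: more likely to mislabel low-margin (ambiguous) cases."""
    p_flip = flip_rate * np.exp(-3.0 * model_margin)  # hypothetical bias model
    return 1 - true_label if rng.random() < p_flip else true_label

def run_active_learning(bias_aware, n_rounds=30, batch=10, seed_size=50):
    labeled_idx = list(range(seed_size))
    labels = {i: y_pool[i] for i in labeled_idx}       # seed labels are clean
    weights = {i: 1.0 for i in labeled_idx}
    clf = LogisticRegression(max_iter=1000)

    for _ in range(n_rounds):
        idx = np.array(labeled_idx)
        clf.fit(X_pool[idx], [labels[i] for i in idx],
                sample_weight=[weights[i] for i in idx])

        # Uncertainty sampling: query pool points closest to 0.5 probability.
        unlabeled = np.setdiff1d(np.arange(len(X_pool)), idx)
        proba = clf.predict_proba(X_pool[unlabeled])[:, 1]
        margin = np.abs(proba - 0.5)
        order = np.argsort(margin)[:batch]

        for i, m in zip(unlabeled[order], margin[order]):
            labels[i] = biased_expert(y_pool[i], m)
            # Bias-aware variant: down-weight labels given on ambiguous cases,
            # where the simulated expert bias is strongest.
            weights[i] = (0.2 + m) if bias_aware else 1.0
            labeled_idx.append(i)

    # Final refit on all collected labels before evaluation.
    idx = np.array(labeled_idx)
    clf.fit(X_pool[idx], [labels[i] for i in idx],
            sample_weight=[weights[i] for i in idx])
    return accuracy_score(y_test, clf.predict(X_test))

print("naive      :", run_active_learning(bias_aware=False))
print("bias-aware :", run_active_learning(bias_aware=True))
```

Under these assumptions, the bias-aware variant typically recovers some of the accuracy lost to noisy expert labels; the paper's framework instead learns the bias model itself rather than assuming a fixed weighting rule.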

