Abstract

The computational capabilities of AI engines, integrated with human knowledge and experience, can help create intelligent human-in-the-loop (HITL) decision systems. Such synergistic frameworks improve the decision-making process and make AI-enabled systems more robust, accurate, and intelligent. In safety-critical applications that require a certain level of human supervision, both human and AI engine errors can be costly. However, modeling human behavior in a collaborative human-AI decision setup is not straightforward: humans use cognitive mechanisms and decision heuristics to process information and make decisions under uncertainty. This paper, for the first time, presents a systematic framework for modeling, tracking, and adapting to behavioral biases in a collaborative decision environment within an “Active Learning” context. The proposed framework is validated through experiments on a real-world pancreatic cancer dataset. We consider five learning scenarios based on different grades of human experts and compare the performance of bias-aware decision models with naive models. We observe that bias-aware models improve classification accuracy by up to 16%.
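The abstract does not spell out the algorithm, but the sketch below gives a rough sense of what a bias-aware active-learning loop of this kind could look like. It is a minimal illustration, not the paper's method: the dataset is synthetic rather than the pancreatic cancer data, and all names and parameters (the reliability estimate, the 0.6 acceptance threshold, the simulated expert) are hypothetical assumptions introduced only for illustration.

```python
# Illustrative sketch of a bias-aware active-learning loop (assumptions, not the
# paper's algorithm). A noisy human expert is simulated, their reliability is
# tracked online, and labels are accepted or overridden based on that estimate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for the real dataset.
X = rng.normal(size=(600, 5))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labeled = list(range(20))            # small seed set with trusted labels
unlabeled = list(range(20, len(X)))
y_obs = y_true.copy()                # observed (possibly biased) labels

# Hypothetical per-expert reliability: probability the expert labels correctly.
expert_reliability_true = 0.75       # unknown ground truth used only for simulation
correct, total = 1.0, 2.0            # Beta-like counts behind the running estimate
reliability_est = correct / total

model = LogisticRegression(max_iter=1000)

for step in range(100):
    model.fit(X[labeled], y_obs[labeled])

    # Uncertainty sampling: query the instance the model is least sure about.
    probs = model.predict_proba(X[unlabeled])[:, 1]
    qi = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]

    # Simulated biased expert: flips the true label with prob. (1 - reliability).
    noisy = y_true[qi] if rng.random() < expert_reliability_true else 1 - y_true[qi]

    # Track the bias: compare the expert's answer with the model's prediction and
    # update the reliability estimate (a crude stand-in for the tracking step).
    model_pred = int(model.predict(X[[qi]])[0])
    total += 1.0
    correct += float(noisy == model_pred)
    reliability_est = correct / total

    # Adapt: accept the expert's label only when estimated reliability is high
    # enough; otherwise fall back to the model's own prediction.
    y_obs[qi] = noisy if reliability_est >= 0.6 else model_pred

    labeled.append(qi)
    unlabeled.remove(qi)

print(f"estimated expert reliability ~ {reliability_est:.2f}")
print(f"accuracy on all data: {model.score(X, y_true):.3f}")
```

Under these assumptions, the loop is "bias-aware" only in the simplest sense that the expert's estimated reliability gates how much their labels are trusted; the paper's framework presumably models richer cognitive biases than this single parameter.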
