Abstract

Human decision-making can be affected by cognitive biases, and outside observers can often detect biased decision-making in others. Accordingly, intelligent agents endowed with the computational equivalent of the human mind should be able to detect biased reasoning and help people improve their decision-making in practical applications. We are developing bias-detection functionalities in OntoAgent, a cognitively inspired agent environment that supports the modeling of intelligent agents with a wide range of sophisticated functionalities, including semantically oriented language processing, decision-making, learning, and collaboration with people. Within OntoAgent, different aspects of agent functionality are described using microtheories that are realized as formal computational models. This paper presents the OntoAgent model that supports the automatic detection of decision-making biases, using clinical medicine as a sample application area. It shows how an intelligent agent serving as a clinician’s assistant can follow the doctor–patient interaction and warn the doctor if it appears that his own or the patient’s decisions might be unwittingly affected by biased reasoning.
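For concreteness, the following is a minimal, hypothetical sketch of the kind of rule-based check such a clinician's assistant might apply to a transcript of doctor–patient turns. It is not OntoAgent's actual model or API: the `DialogueTurn` structure and `detect_anchoring` function are illustrative assumptions, and anchoring is only one of many biases an agent of this kind would need to cover.

```python
# Hypothetical sketch only; OntoAgent's actual microtheories are far richer.
# All names below (DialogueTurn, detect_anchoring) are illustrative.
from dataclasses import dataclass

@dataclass
class DialogueTurn:
    speaker: str                 # "doctor" or "patient"
    diagnosis: str | None        # diagnosis asserted in this turn, if any
    evidence_against: set[str]   # diagnoses contradicted by new findings

def detect_anchoring(turns: list[DialogueTurn]) -> str | None:
    """Flag possible anchoring bias: the doctor re-asserts the same
    diagnosis after contradicting evidence has already surfaced."""
    contradicted: set[str] = set()
    for turn in turns:
        contradicted |= turn.evidence_against
        if (turn.speaker == "doctor" and turn.diagnosis
                and turn.diagnosis in contradicted):
            return (f"Warning: diagnosis '{turn.diagnosis}' re-asserted "
                    f"despite contradicting evidence (possible anchoring).")
    return None

# Toy interaction: a negative test contradicts the initial diagnosis,
# but the doctor asserts it again anyway.
turns = [
    DialogueTurn("doctor", "influenza", set()),
    DialogueTurn("patient", None, {"influenza"}),  # e.g., negative flu test
    DialogueTurn("doctor", "influenza", set()),
]
print(detect_anchoring(turns))
```

Running the sketch prints a warning on the third turn, since the diagnosis is re-asserted after a contradicting finding; a real agent would instead reason over semantic representations of the dialogue rather than flat flags like these.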
