Human decision-making can be affected by cognitive biases, and outside observers can often detect biased decision-making in others. Accordingly, intelligent agents endowed with the computational equivalent of the human mind should be able to detect biased reasoning and help people improve their decision-making in practical applications. We are modeling bias-detection functionalities in OntoAgent, a cognitively inspired agent environment that supports the modeling of intelligent agents with a wide range of sophisticated functionalities, including semantically oriented language processing, decision-making, learning, and collaborating with people. Within OntoAgent, different aspects of agent functionality are described using microtheories that are realized as formal computational models. This paper presents the OntoAgent model that supports the automatic detection of decision-making biases, using clinical medicine as a sample application area. It shows how an intelligent agent serving as a clinician’s assistant can follow the doctor–patient interaction and warn the doctor if it appears that his own or the patient’s decisions might be unwittingly affected by biased reasoning.