Abstract

Critics of clinical artificial intelligence (AI) suggest that the technology is ethically harmful because it may dehumanize the doctor-patient relationship (DPR) by eliminating moral empathy, which is viewed as a distinctively human trait. The benefits of clinical empathy (that is, moral empathy applied in the clinical context) are widely praised, but this praise is often unquestioning and lacks context. In this article, I argue that criticisms of clinical AI based on appeals to empathy are misplaced. As psychological and philosophical research has shown, empathy leads to certain kinds of biased reasoning and choices, and these biases consistently affect the DPR. Empathy may produce partial judgments and asymmetric DPRs, as well as disparities in the treatment of patients, undermining respect for patient autonomy and equality. Engineers should therefore take the flaws of empathy into account when designing affective artificial systems. Some ethicists have defended sympathy and compassion (i.e., displaying emotional concern while maintaining a balanced distance) as more beneficial than perspective-taking in the clinical context, but these claims do not appear to have influenced the AI debate. Thus, this article will also argue that if machines are programmed for affective behavior, they should be given some ethical scaffolding as well.

