Abstract
As costs decline and the technology improves, current trends suggest that artificial intelligence (AI) and a variety of "carebots" will increasingly be adopted in medical care. Medical ethicists have long expressed concern that such technologies remove the human element from medicine, resulting in dehumanization and depersonalized care. However, we argue that where shame presents a barrier to medical care, it is sometimes ethically permissible and even desirable to deploy AI/carebots because (i) dehumanization in medicine is not always morally wrong, and (ii) dehumanization can sometimes better promote and protect important medical values. Shame is often a consequence of the human-to-human element of medical care and can prevent patients from seeking treatment and from disclosing important information to their healthcare providers. Shame-inducing conditions and treatments therefore offer opportunities to introduce AI/carebots in a way that removes the human element of medicine but does so ethically. We outline numerous examples of shame-inducing interactions and show how they can be overcome by existing and anticipated AI/carebot technologies that remove the human element from care.