Palliative care aims to improve the quality of life for seriously ill individuals and their caregivers by addressing their holistic care needs through a person- and family-centered approach. While there have been growing efforts to integrate Artificial Intelligence (AI) into palliative care practice and research, it remains unclear whether the use of AI can facilitate the goals of palliative care. In this paper, we present three hypothetical case examples of using AI in the palliative care context, covering machine learning algorithms that predict patient mortality, natural language processing models that detect psychological symptoms, and AI chatbots that address caregivers’ unmet needs. Using these cases, we examine the ethical dimensions of utilizing AI in palliative care by applying five widely accepted moral principles that guide ethical deliberations in AI: beneficence, nonmaleficence, autonomy, justice, and explicability. We address key ethical questions arising from these five core moral principles and analyze the potential impact of AI use on palliative care stakeholders. Applying a critical lens, we assess whether AI can facilitate the primary aim of palliative care: supporting seriously ill individuals and their families. We conclude by discussing the gaps that must be addressed to promote ethical and responsible AI usage in palliative care.