Breast cancer care is a leading area for development of artificial intelligence (AI), with applications including screening and diagnosis, risk calculation, prognostication and clinical decision-support, management planning, and precision medicine. We review the ethical, legal and social implications of these developments. We consider the values encoded in algorithms, the need to evaluate outcomes, and issues of bias and transferability, data ownership, confidentiality and consent, and legal, moral and professional responsibility. We consider potential effects on patients, including effects on trust in healthcare, and provide some social science explanations for the apparent rush to implement AI solutions. We conclude by anticipating future directions for AI in breast cancer care. Stakeholders in healthcare AI should acknowledge that their enterprise is an ethical, legal and social challenge, not just a technical challenge. Taking these challenges seriously will require broad engagement, imposition of conditions on implementation, and pre-emptive systems of oversight to ensure that development does not run ahead of evaluation and deliberation. Once artificial intelligence becomes institutionalised, it may be difficult to reverse: a proactive role for government, regulators and professional groups will help ensure introduction in robust research contexts, and the development of a sound evidence base regarding real-world effectiveness. Detailed public discussion is required to consider what kind of AI is acceptable rather than simply accepting what is offered, thus optimising outcomes for health systems, professionals, society and those receiving care.