Abstract

Artificial intelligence (AI) in nuclear medicine has gained significant traction and promises to be a disruptive but innovative technology. Recent developments in artificial neural networks, machine learning, and deep learning have ignited debate over the ethical and legal challenges associated with the use of AI in healthcare and medicine. While AI in nuclear medicine has the potential to improve workflow and productivity and to enhance clinical and research capabilities, there remain ethical, social, and legal responsibilities to the profession and to patients. Enthusiasm to embrace new technology should not displace responsibility for its ethical, social, and legal application. This is especially true in relation to data usage, the algorithms applied, and how those algorithms are used in practice. Governance of software and algorithms used for detection (segmentation) and/or diagnosis (classification) of disease from medical images requires rigorous, evidence-based regulation. A number of frameworks have been developed for the ethical application of AI in society generally and in radiology. For nuclear medicine, consideration needs to be given to beneficence, nonmaleficence, fairness and justice, safety, reliability, data security, privacy and confidentiality, mitigation of bias, transparency, explainability, and autonomy. AI is merely a tool; how it is utilised is a human choice. AI applications have the potential to enhance clinical and research practice in nuclear medicine while concurrently producing deeper, more meaningful interactions between physician and patient. Nonetheless, the ethical, legal, and social challenges demand careful attention and the formulation of standards and guidelines for nuclear medicine.
