Abstract

In 2018, the National Institutes of Health identified key focus areas for the future of artificial intelligence in medical imaging, creating a foundational roadmap for research in image acquisition, algorithms, data standardization and translatable clinical decision support systems. Among the issues raised in that report, data availability, the need for novel computing architectures and explainable artificial intelligence algorithms remain relevant, despite the tremendous progress made over the past few years. Furthermore, the translational goals of data sharing, validation of performance for regulatory approval, generalizability and mitigation of unintended bias must be accounted for early in the development process. In this Perspective, we explore challenges unique to high-dimensional clinical imaging data and highlight some of the technical and ethical considerations involved in developing machine learning systems that better represent the high-dimensional nature of many imaging modalities. We also argue that methods addressing explainability, uncertainty and bias should be treated as core components of any clinical machine learning system. Substantial advances have been made over the past decade in developing high-performance machine learning models for medical applications, but translating them into practical clinical decision-making processes remains challenging, and this Perspective provides insights into a range of challenges specific to high-dimensional, multimodal medical imaging.
