Abstract

The use of artificial intelligence (AI) and machine learning (ML) in clinical care offers great promise to improve patient health outcomes and reduce health inequity across patient populations. However, inherent biases in these applications, and the consequent risk of harm, limit their current use. Multi-modal workflows are needed to minimize these limitations in the development, implementation, and evaluation of ML systems in real-world settings, improving efficacy while reducing bias and the risk of harm. Comprehensive consideration of rapidly evolving AI technologies and their inherent risks of bias, of the expanding volume and nature of data sources, and of the evolving regulatory landscape can contribute meaningfully to the development of AI-enhanced clinical decision making and to the reduction of health inequity.

Highlights

  • Machine learning (ML) algorithms are themselves subject to biases that flow (a) from the heuristics inherent in the act of programming, and (b), in unsupervised learning and reinforcement learning models, from flaws inherent in the data, which the models can exacerbate

  • As public health consultants and clinical researchers, we remind ourselves of the importance of “reflective practice”, and we encourage all who design or implement clinical tools relying on artificial intelligence (AI) and ML to pause and reflect on potential sources of bias

  • Expand model inputs beyond the clinical data of immediate interest to include data on factors indicative of the social determinants of health and health inequity; explore the use of natural language processing (NLP) to broaden the data available to ML models, given the potential for unstructured case notes in the electronic health record (EHR) to reveal drivers of positive or adverse health outcomes
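The NLP suggestion above can be illustrated with a deliberately minimal sketch: flagging mentions of social determinants of health (SDoH) in free-text case notes with keyword patterns. The category names and patterns here are hypothetical examples, not a validated lexicon, and a real clinical system would use a trained clinical NLP model rather than keyword matching; this only shows the shape of turning unstructured notes into structured features an ML model could consume.

```python
import re

# Hypothetical SDoH keyword lexicon for illustration only; a production
# system would rely on a validated clinical NLP model, not regexes.
SDOH_PATTERNS = {
    "housing_instability": r"\b(homeless|eviction|unstable housing)\b",
    "food_insecurity": r"\b(food insecur\w*|skipping meals)\b",
    "transportation_barrier": r"\b(no transportation|missed ride)\b",
}


def flag_sdoh_mentions(note: str) -> dict:
    """Return, per SDoH category, whether a free-text case note mentions it."""
    text = note.lower()
    return {cat: bool(re.search(pat, text)) for cat, pat in SDOH_PATTERNS.items()}


# Toy case note (fabricated for illustration).
note = "Patient reports a recent eviction and has been skipping meals."
flags = flag_sdoh_mentions(note)
```

The resulting boolean flags could then be appended to a patient's structured feature vector, which is the sense in which unstructured notes "expand the breadth of data" available to an ML model.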



Introduction

The growth of artificial intelligence (AI) in clinical practice is driven partly by commercial interests, with AI-focused companies racing to introduce machine learning (ML)-powered technologies and decision tools for clinical practice, and partly by the recognition among clinicians and regulators that AI holds promise to ease clinicians’ burden while reducing health inequity. Commercial entities have had success, with the Food and Drug Administration (FDA) granting marketing approvals for Software as a Medical Device (SaMD), and the Centers for Medicare and Medicaid Services (CMS) recently awarded a prize for “explainable artificial intelligence solutions to help front-line clinicians understand and trust AI-driven data feedback” [3]. Growth in AI implementation will be further supported by the National Institutes of Health’s (NIH) Digital Health Equity, Training and Research efforts. At the same time, caution is warranted: “Using problematic data for models will amplify the gaps” [4].

Heuristics Can Shape Biases in Clinical Data and ML Predictions
Algorithmic Biases—The Ghost in the Machine?
Ophthalmology Imaging—A Case in Point
Success in the Lab May Not Translate to the Real-World Clinic
What Is in The Black Box?
Hearing the Patients’ and Providers’ Voices—Leverage Unstructured Case Note Data
Optimization of Clinical Trials with AI
Regulatory Considerations
FDA and the US Experience
Conclusions and Recommendations
