Abstract

Existing health disparities in the United States are driven in part by the way healthcare is delivered. There is interest in using Artificial Intelligence (AI)-driven Software as a Medical Device (SaMD) to aid healthcare delivery and reduce health disparities. However, AI-driven tools can codify bias in healthcare settings, and some AI-driven SaMD products have displayed substandard performance among racial and ethnic minorities. Auditing these tools for biased output can help produce more equitable outcomes across populations, yet there are currently no explicit Food and Drug Administration (FDA) regulations that examine bias in AI software used in healthcare. We therefore propose that the FDA develop a distinct regulatory process for AI-driven SaMD, one that assesses whether output is equitable across populations and guards against exacerbating existing health disparities. This change could help prevent AI-driven health disparities nationwide.
