Abstract

Artificial intelligence (AI) has great potential to improve health care quality, safety, efficiency, and access. However, the adoption of AI in health care has lagged behind other sectors. Challenges, including data limitations, misaligned incentives, and organizational obstacles, have hindered implementation. Strategic demonstrations, partnerships, aligned incentives, and continued investment are needed to enable responsible adoption of AI. High reliability health care organizations, which ensure consistently safe, high-quality care through a culture focused on reliability, accountability, and learning from errors and near misses, offer insights into safely implementing major initiatives. Frameworks such as the Patient Safety Adoption Framework provide practical guidance on leadership, culture, process, measurement, and person-centeredness for successfully adopting safety practices. The Veterans Health Administration applied a high reliability health care model to instill safety principles and improve outcomes. As the use of AI becomes more widespread, ensuring its ethical development is crucial to avoid introducing new risks and harms. The US Department of Veterans Affairs National AI Institute proposed a Trustworthy AI Framework tailored for federal health care, with 6 principles: purposeful; effective and safe; secure and private; fair and equitable; transparent and explainable; and accountable and monitored. The framework aims to manage risks and build trust. Combining these trustworthy AI principles with high reliability safety principles can enable successful, trustworthy AI that improves health care quality, safety, efficiency, and access. Overcoming AI adoption barriers will require strategic efforts, partnerships, and investment to implement AI responsibly, safely, and equitably within the health care context.
