Abstract

Artificial intelligence (AI) for health is a rapidly evolving field, with many potential benefits and risks, but governing bodies are struggling to keep pace. This lag is further fuelled by a lack of comprehensive international guidance on how to ensure AI for health complies with ethical norms and human rights standards. To address this gap, WHO has published guidance on the ethics and governance of AI for health. The guidance presents six consensus principles and a series of recommendations. Three themes stand out: increase patient and public involvement; eliminate gaps in digital literacy, as well as biases that infringe on human autonomy and exacerbate health inequities; and establish frameworks for appropriate governance and accountability. These themes are not new, overlapping with the OECD's AI principles, but this is the first comprehensive guidance from a global body to establish such principles for health.

The guidance encourages AI developers and other relevant stakeholders to engage with end users—patients, communities, and the public—at all stages of the AI pathway (from design to deployment) and over proposed changes to how health data will be collected or used. There are examples where this is already taking place. The UK Health Data Research Alliance, convened by Health Data Research UK (HDRUK), is an independent alliance of large-scale health data custodians with the goal of developing and disseminating best practice for the ethical and responsible use of health datasets. Key to this is interaction with patients and disease advocacy communities to understand their interests and values regarding health data use, and to share the work of the Alliance. HDRUK have also set up a patient and public involvement group to help to inform their COVID-19 strategy. While the WHO guidance encourages post-deployment feedback on health AI, this requires structured processes to ensure feedback is provided in a transparent way. End-user engagement at inception and feedback from real-world use are pivotal to ensure that users' needs are at the heart of AI design, and that safety, efficacy, and functionality are monitored and improved.

Open discussions can also help to take the public's temperature on controversial topics in the field, such as the trade-off between individual privacy protection and data sharing to benefit science—a particularly pertinent discussion during COVID-19. But such discussions can be hindered by gaps in digital literacy among the public. Furthermore, concerns that biases in data and algorithms will entrench existing health inequities, and that any advancements made will not be shared equally with those contributing their data—especially those in low-income and middle-income countries (LMICs)—engender mistrust. While the WHO guidance delves into these issues, it largely focuses on digital literacy among the health-care workforce, leaving identification of ways to address biases as a suggested research topic. Improving digital literacy through education and resources is key so that patients, communities, and the public can make informed, autonomous decisions about their health. Likewise, robust research at the intersection of AI, ethics, and health in LMICs is imperative to drive equitable access to health AI. All of these steps require reforms, not only to support universal health coverage but also to ensure that AI tools are adequately assessed and validated for fairness, and that developers and technology companies are held to account for errors and biased outputs.
But the guidance points out that technology companies setting out (and holding themselves accountable to) their own ethical norms and standards is not the way forward. Rather, AI should be audited and quality controlled through independent external oversight. A navigation aid developed by Urs Gasser and colleagues for digital tools endorses the need for systemic accountability, not only for compliance but also to identify unanticipated ethical consequences, while a checklist developed by Vollmer and colleagues can help journal editors and peer reviewers to critically evaluate health AI studies. Oversight through open-source communities and citizen science projects could also help to build confidence that AI tools are fit for purpose. Democratising the full health AI pathway through transparent communication is key to cultivating trust in public health strategies, and a participatory approach is an excellent opportunity for historically under-represented voices to be heard. But stringent policies and legal reforms are needed from independent government bodies and regulatory agencies to support equitable deployment at scale, and to provide oversight and accountability for negative consequences.

Highlights

  • Artificial intelligence (AI) for health is a rapidly evolving field, with many potential benefits and risks, but governing bodies are struggling to keep pace. This lag is further fuelled by a lack of comprehensive international guidance on how to ensure AI for health complies with ethical norms and human rights standards

  • The guidance encourages AI developers and other relevant stakeholders to engage with end users—patients, communities, and the public—at all stages of the AI pathway and over proposed changes to how health data will be collected or used

  • Health Data Research UK (HDRUK) have set up a patient and public involvement group to help to inform their COVID-19 strategy

