Abstract

Trengove and colleagues have made a valuable contribution to a debate that currently dominates the agenda of AI regulators from Washington to Brussels: which methods should be used to assess the risk level of a certain type of AI in a specific use case, and how should such risk profiles shape the rules that govern the use of AI? While the need for a nuanced approach to policy making is undisputed, different pathways for balancing the relative impact of the type of AI, the field of use, ethical principles, outcomes, and resulting benefits are being considered. Some of the hallmarks of smart AI regulation are agility, adaptability, and clarity. Expanding on Trengove et al.'s suggestion to weigh risk profiles against benefit profiles, I advocate focusing on the outcomes of the use of AI rather than on the technology itself. This approach empowers developers, providers, users, and regulators of AI technology to engage in meaningful, factual discussions about real, immediate risks rather than perceived, hypothetical risks, and to prevent risk assessments that miss the mark either by overlooking risks or by misrepresenting them as a result of having to shoehorn individual use cases into prescriptive and rigid AI technology classifications.

Of course, a coherent logical line needs to be drawn from the capabilities of AI to the results of using it. Prophetic doomsday projections of AI-inflicted human extinction calling for AI embargoes and moratoria fail to do this, stir unfounded hype bordering on hysteria, and distract from the pressing task at hand, which is to responsibly leverage the potential and mitigate the risks of using (generative) AI today. Trengove et al.'s pragmatic perspective sets a much-needed counterpoint to such AI-pocalyptic musings by describing the relative importance of transparency, accountability, and safety in different scenarios, and by rightly concluding that not all ethical principles are equally important in all cases. In my paper I illustrated this by highlighting that it is possible to use agnostic learning on high-dimensional data to produce clinically useful conclusions without being able to fully explain how the underlying AI system reached them. Such insights can, for example, provide useful leads for clinicians in search of new treatment options: even where the model's reasoning cannot be fully explained, its output can still support clinically useful outcomes. If the outcome of using AI in such a situation could be lifesaving, and the risk of not using an AI-enabled outcome life-threatening, the risk of not being able to fully evidence the AI logic might be acceptable (although understanding how exactly the use of AI aided a positive outcome would of course be a very desirable add-on).
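To make the outcome-over-explainability point concrete, the following is a minimal, hypothetical sketch (in Python with scikit-learn; the synthetic data, model choice, and metric are illustrative assumptions, not the method or code used in my paper or by Trengove et al.): an opaque model trained agnostically on high-dimensional data is judged by its predictive outcome and by the candidate leads it surfaces, not by whether its internal logic can be fully explained.

```python
# Illustrative sketch only: synthetic data and model choice are assumptions,
# not the data, method, or code from the letter or the cited paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for high-dimensional clinical data:
# 300 patients, 2000 features (e.g. molecular markers), 15 of which carry signal.
X, y = make_classification(n_samples=300, n_features=2000, n_informative=15,
                           random_state=0)

# An opaque, non-linear model trained "agnostically" on all features.
model = GradientBoostingClassifier(random_state=0)

# Judge the model by its predictive outcome (cross-validated AUC),
# not by whether its internal reasoning can be fully explained.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC: {auc:.2f}")

# Ranked feature importances can serve as hypothesis-generating leads for
# clinicians, even though they are not a full explanation of the model.
model.fit(X, y)
top_leads = np.argsort(model.feature_importances_)[::-1][:10]
print("Candidate features to investigate further:", top_leads)
```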
Building further on Trengove et al.'s argument, smart regulation also requires a high level of adaptability: new AI technologies and applications, and the capabilities and performance of existing AI systems, evolve at lightning speed without generating the evidence base for risk mitigation at the same pace. Only regulation that focuses on outcomes rather than technology will be able to keep up and adapt to changing conditions quickly and efficiently. However, it needs to be pointed out that the nature of an AI tool cannot be entirely separated from the intent of using it. Every AI algorithm can, before any deployment and solely through the way the model is trained and tested, contain features that could cause harm when it is applied, regardless of, and sometimes even against, the intent of use (Blackman R. Ethical Machines. Harvard Business Review Press, 2022). Therefore, smart AI regulation, while primarily focusing on risk-based use-case and outcome assessments, will also consider the type of AI algorithm a priori, to an extent that covers the risks related to building it in the first place.

Following Trengove et al.'s proposal that ethical frameworks be implemented through risk-based regulation, the EU AI Act, probably the most advanced example of AI legislation at the time of writing, aims to strike a balance between regulating the technology and regulating intended use cases. On track to become the world's first legislation governing the use of AI across an entire geopolitical bloc, it is expected to become the de facto global standard for AI regulation (Chan K. European Union set to be trailblazer in global rush to regulate artificial intelligence. TIME Magazine, 2023. https://time.com/6278144/european-union-artificial-intelligence-regulation/). It had initially been designed following a light-touch, risk-based, and use-case-specific approach, but the meteoric spread and adoption of generative AI over the last few months, and particularly the quasi use-case-agnostic adaptability of foundation models, have led to the eleventh-hour introduction of rigid regulatory elements in the form of Annex III, which casts selected types of AI, including generative AI, into a priori risk levels. This prescriptive, generalised categorisation has been criticised as overly rigid and continues to be fine-tuned (Gerlach N. The case of the EU AI Act: why we need to return to a risk-based approach. IAPP, 2023. https://iapp.org/news/a/the-case-of-the-eu-ai-act-why-we-need-to-return-to-a-risk-based-approach/).

Policymakers in the US are following closely and have begun to explore risk-based approaches to AI regulation as well. In a prominent example, the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on AI, which took place on May 16, 2023, convened some of the most influential voices in generative AI and saw IBM's Chief Privacy and Trust Officer Christina Montgomery advocate a ‘Precision Regulation’ concept for “establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself” (Montgomery C. Testimony before the US Senate Judiciary Committee, Subcommittee on Privacy, Technology, and the Law, hearing on Oversight of AI: Rules for Artificial Intelligence. US Senate, 2023. https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Montgomery.pdf). Think tanks such as the Coalition for Health AI and bodies such as the OECD have proposed guidelines for assessing the risks and benefits of using AI in specific use-case scenarios (Blueprint for trustworthy AI implementation guidance and assurance for healthcare. Coalition for Health AI, 2023. https://www.coalitionforhealthai.org/papers/blueprint-for-trustworthy-ai_V1.0.pdf; OECD framework for the classification of AI systems. OECD, 2022. https://www.oecd.org/publications/oecd-framework-for-the-classification-of-ai-systems-cb6d9eca-en.htm). Applying such AI risk- and benefit-assessment frameworks to specific fields of application and calibrating them for individual use cases is an ongoing and evolving effort. As such, I urge caution in declaring certain fields of application permanently risk-free or low-risk.
Based on the same rationale underlying the explainability scenario described earlier in this letter, Trengove and colleagues are correct that the use of generative AI for early-stage drug discovery and design carries different safety risks than, for example, the use of generative AI for clinical diagnosis and treatment support. They also rightly describe existing regulatory frameworks for safe drug development as well positioned to mitigate downstream safety risks associated with using AI for early-stage drug discovery. But the early phases of drug development still need to undergo regular, thorough AI risk assessments, as the FDA's May 2023 consultation paper on best AI and machine-learning practices for drug development attests (Using artificial intelligence and machine learning in the development of drugs and biological products: discussion paper and request for feedback. FDA, 2023. https://www.fda.gov/media/167973/download).

Smart regulation for (generative) AI in health and medicine is, and owing to the nature of technological progress in AI will remain, a work in progress. This means neither that AI regulation will be inherently toothless nor that it will be inefficient. Regulators walk a fine line between stalling AI innovation and progress by being overly prescriptive on the one hand, and allowing uncontrolled AI exploration in the spirit of ‘move fast and break things’ by falling behind technological evolution and adoption on the other. This constitutes a moving-goalposts problem that cannot be solved for good but must be managed by applying foresight, staying up to date on technical developments, and carefully balancing stable regulatory standards with updates and adaptations of rules and policies. I hope that all stakeholders continue to collaborate on implementing ethical principles for the responsible use of AI following a risk-based, outcomes-focused approach that stimulates innovation, protects those who use AI, and ensures sustainable value and benefits are delivered to all.

SH is responsible for all parts of the letter. The author is an employee of the Digital Health Cooperative Research Centre and an inventor on granted US Patent 11,250,219 B2, “Cognitive Natural Language Generation with Style Model”. The author received no funding for this letter. The author thanks Christine Bennett and Isobel Frean for providing valuable insights and guidance.