Abstract

Purpose: This paper seeks to define a risk taxonomy, establish meaningful controls, and create a prospective harms model for AI risks in healthcare. Currently, there is no known comprehensive definition of AI risks as applied to industry and society.
Materials and Methods: The temptation in current research, in both academia and industry, is to apply exclusively technology-based solutions to these complex problems; this view is myopic, however, and can be remedied by establishing effective controls informed by a holistic approach to managing AI risk. Sociotechnical Systems Theory (STS) is an attractive theoretical lens for this issue because it prevents collapsing a multifaceted problem into a one-dimensional solution. Specifically, a multidisciplinary approach, one that includes both the sciences and the humanities, reveals a multidimensional view of technology-society interaction, exemplified by the advent of AI.
Findings: After advancing this risk taxonomy, this paper applies the risk management framework of Lean Six Sigma (LSS) to propose effective mitigating controls for the identified risks. LSS determines controls through data collection and analysis and supports data-driven decision-making for industry professionals.
Implications to Theory, Practice and Policy: Instantiating the theory of STS in industry practice could therefore be critical for identifying and mitigating real-world risks from AI. In summary, this paper combines the academic theory of sociotechnical systems with the industry practice of Lean Six Sigma to develop a hybrid model that fills a gap in the literature. Drawing upon both theory and practice ensures a robust, informed risk model of AI use in healthcare.
