Abstract

Purpose
Artificial Intelligence (AI) systems play an increasing role in organisational management and in process and product development. This study identifies risks and hazards that AI systems may pose to the work health and safety (WHS) of those engaging with or exposed to them. A conceptual framework of organisational measures for minimising those risks is proposed.

Design/methodology/approach
Adopting an exploratory, inductive qualitative approach, the researchers interviewed 30 experts in data science, technology and WHS and 12 representatives of nine organisations using or preparing to use AI, and ran online workshops, including one with 12 WHS inspectors. The research mapped AI ethics principles endorsed by the Australian government onto the AI Canvas, a tool for tracking AI implementation from ideation through development to operation. Fieldwork and analysis produced a matrix of WHS and organisational–managerial risks, together with risk minimisation strategies, for AI use at each implementation stage.

Findings
The study identified psychosocial, work stress and workplace relational risks that organisations and employees face during AI implementation in a workplace. Privacy, business continuity and gaming risks were also noted. All may persist and recur over the lifetime of an AI system. Alertness to such risks may be enhanced by adopting a systematic risk assessment approach.

Originality/value
A collaborative project involving sociologists, economists and computer scientists, the study relates abstract AI ethics principles to concrete WHS risks and hazards. It translates principles typically applied at the societal level to workplaces and proposes a process for assessing AI system risks.
