This article provides a comparative analysis of the regulatory landscapes governing artificial intelligence (AI) in healthcare in the European Union (EU) and South Africa (SA). It critically examines the approaches, frameworks and mechanisms each jurisdiction employs to balance innovation with ethical considerations, patient safety, data privacy and accountability. The EU’s proactive stance, embodied by the AI Act, offers a structured, risk-based categorisation of AI applications, with stringent requirements for risk management, data governance and human oversight. In contrast, SA’s regulatory environment is in its infancy and lacks AI-specific provisions: existing legislation such as the National Health Act and the Medicines and Related Substances Act provides a foundational but limited framework for addressing the unique challenges posed by AI in healthcare. The article examines the dynamic nature of AI technologies, highlighting the need for continuous risk assessment, transparent and responsible data governance, and human oversight as safeguards of patient safety and autonomy. It argues for clear liability frameworks to address potential malfunctions and security breaches in AI applications. Through this comparative lens, the article identifies regulatory gaps and proposes that the South African Law Reform Commission (SALRC) take a leading role in developing draft AI legislation before the challenges posed by these technologies outpace the law.