Abstract
This analysis examines the adequacy of Ghana's current regulatory framework in addressing AI explainability in clinical decision support systems. It evaluates existing healthcare laws, derives underlying principles, and assesses their application to AI governance. The study finds significant gaps in the current regulatory regime, including a lack of AI-specific provisions, insufficient explainability requirements, and inadequate frameworks for liability and accountability. While existing principles provide a foundation, they are insufficient to fully address the challenges posed by AI in healthcare. The analysis concludes that comprehensive regulatory development is needed, including AI-specific healthcare regulations, technical standards for explainability, and enhanced regulatory capacity. This study is significant as it provides a roadmap for developing effective AI explainability regulations in Ghana's healthcare sector. It highlights the urgent need for regulatory evolution to ensure safe, ethical, and transparent use of AI in clinical decision support, potentially serving as a model for other developing nations facing similar challenges.