Abstract

Healthcare professionals can leverage Artificial Intelligence (AI) to provide better care for their patients. However, it is also necessary to consider that AI algorithms operate on historical diagnostic data, which often consist largely of evidence gathered from men. The biases of prior practices and the perpetuation of processes that exclude women can lead to inaccurate medical decisions. The ramifications of such errors show that the incorrect use of AI raises critical questions about who should be held responsible for potential incidents. This study analyzes the role of AI in shaping women's healthcare and provides an overview of the liability implications of AI mistakes. Finally, this work presents a framework for algorithmic auditing to ensure that AI data are collected and stored according to secure, legal, and fair practices.
