Recent developments in artificial intelligence (AI) have led to changes in healthcare. Government and regulatory bodies have advocated for transparency in AI systems, recommending that users be given more detail about AI accuracy and how AI systems work. However, increased transparency could lead to negative outcomes if humans become over-reliant on the technology. This study investigated how changes in AI transparency affected human decision-making in a medical-screening visual search task. Transparency was manipulated by either giving or withholding knowledge about the accuracy of an 'AI system'. We tested performance in seven simulated laboratory mammography tasks, in which observers searched for a cancer that could be correctly or incorrectly flagged by computer-aided detection (CAD) 'AI prompts'. Across tasks, the CAD systems varied in accuracy. In the 'transparent' condition, participants were told the accuracy of the CAD system; in the 'not transparent' condition, they were not. The results showed that increasing CAD transparency impaired task performance, producing more false alarms, decreased sensitivity, a higher recall rate, and lower positive predictive value. Given the increasing investment in AI, this research shows that it is important to investigate how the transparency of AI systems affects human decision-making. Increased transparency may lead to overtrust in AI systems, which can impact clinical outcomes.
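The outcome measures named here (false alarms, sensitivity, positive predictive value) are standard signal-detection quantities. As a minimal sketch of how they are typically computed — using hypothetical hit/false-alarm numbers, not data from this study — more false alarms lower both d' sensitivity and PPV:

```python
from statistics import NormalDist

def dprime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def ppv(true_pos: int, false_pos: int) -> float:
    """Positive predictive value: TP / (TP + FP)."""
    return true_pos / (true_pos + false_pos)

# Hypothetical reader data: holding the hit rate fixed, raising the
# false-alarm rate reduces d'; extra false positives also dilute PPV.
print(round(dprime(0.80, 0.10), 2))  # higher sensitivity
print(round(dprime(0.80, 0.25), 2))  # lower sensitivity
print(round(ppv(40, 10), 2))  # 0.8
print(round(ppv(40, 30), 2))  # 0.57
```

This illustrates why an increase in false alarms shows up simultaneously as decreased sensitivity and decreased positive predictive value in the results reported above.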