Abstract

The surging global demand for mental healthcare (MH) services has amplified interest in applying AI-assisted technologies to critical MH components, including assessment and triage. However, while reducing practitioner burden through decision support is a priority in MH-AI integration, the impact of AI systems on practitioner decisions remains under-researched. This study is the first to investigate the interplay between practitioner judgments and AI recommendations in MH diagnostic decision-making. Using a between-subjects vignette design, the study deployed a mock AI system to provide patient triage and assessment information to a sample of MH professionals and psychology students with a strong understanding of assessment and triage procedures. Findings showed that participants were more inclined to trust and accept AI recommendations that aligned with their initial diagnoses and professional intuition. Moreover, those claiming higher expertise demonstrated increased skepticism when the AI's suggestions deviated from their professional judgment. The study underscores that MH practitioners neither show unwavering trust in, nor complete adherence to, AI, but rather exhibit confirmation bias, predominantly favoring suggestions that mirror their pre-existing beliefs. These insights suggest that while practitioners can potentially correct faulty AI recommendations, the utility of implementing debiased AI to counteract practitioner biases warrants additional investigation.
