In this AI-focused era, researchers are exploring applications of artificial intelligence in healthcare, with ChatGPT a primary focus. This Greek study involved 182 doctors from various regions, who interacted with ChatGPT 4.0 through a custom web application. Doctors from diverse departments and experience levels posed questions to ChatGPT, which provided tailored responses. Over one month, data were collected using a form with a 1-to-5 rating scale covering four criteria: clarity of information, response time, accuracy, and overall satisfaction. Satisfaction varied across these criteria: ChatGPT's response speed received high ratings (3.85/5.0), whereas clarity of information was rated moderately (3.43/5.0). A notable finding was the correlation between a doctor's experience and satisfaction with ChatGPT: more experienced doctors (over 21 years of practice) reported lower satisfaction (2.80–3.74/5.0) than their less experienced counterparts (3.43–4.20/5.0). At the level of medical field, Internal Medicine showed higher satisfaction across the evaluation criteria (3.56–3.88) than most other fields, while Psychiatry scored highest overall, with ratings from 3.63 to 5.00. A direct comparison of two departments, Urology and Internal Medicine, found the latter more satisfied with accuracy, clarity of information, response time, and overall experience. These findings illuminate the specific needs of the health sector and highlight both the potential of ChatGPT for providing specialized medical information and the areas where it must improve. Despite current limitations, ChatGPT in its present version offers a valuable resource to the medical community, signaling further advancements and potential integration into healthcare practices.
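The abstract does not describe how the custom web application was implemented; the sketch below is only an illustration of how such a front end might relay a doctor's question to a GPT-4-class model via the OpenAI Python client. The model name, system prompt, and the `ask_chatgpt` helper are assumptions for illustration, not the study's actual code.

```python
# Illustrative sketch: forwarding a doctor's question to a GPT-4-class model.
# Function name, model name, and prompt wording are assumptions; the study's
# actual implementation is not described in the abstract.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_chatgpt(question: str, specialty: str) -> str:
    """Send a doctor's question to the model, tailoring the reply to their specialty."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed; the abstract reports "ChatGPT 4.0"
        messages=[
            {"role": "system",
             "content": f"You are assisting a physician in {specialty}. "
                        "Answer clearly and concisely."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Example usage:
# print(ask_chatgpt("First-line management of uncomplicated cystitis?", "Urology"))
```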