Abstract

Introduction
This study assesses the effectiveness of the AI tools ChatGPT and Google Gemini in educating the public about neurological conditions such as Bell's palsy, tetanus, and headaches, and evaluates chatbot-generated patient guides for readability and ease of understanding.

Methodology
In March 2024, the authors conducted a cross-sectional study that used the AI models ChatGPT and Google Gemini to generate patient education brochures on Bell's palsy, tetanus vaccination, and persistent headache. Brochure quality was assessed through readability, similarity, and a modified DISCERN score for reliability. Statistical analysis, performed in R, compared responses from the two AI models using unpaired t-tests, and the correlation between ease score and reliability was explored using Pearson's correlation coefficient.

Results
The study revealed no significant differences in word count, sentence count, or average words per sentence between the two AI tools. However, materials generated by ChatGPT had significantly higher ease scores, indicating its proficiency in creating more understandable content (p < 0.05).

Conclusions
Despite similar structural metrics, ChatGPT outperformed Google Gemini in readability, suggesting it may be better suited to creating understandable patient education materials. As AI advances, further research across more tools and medical conditions is essential to ensure these technologies meet diverse patient education needs.
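The statistical comparison described in the Methodology (unpaired t-test on ease scores, Pearson correlation between ease and reliability) can be sketched as follows. All numbers below are hypothetical placeholders, not the study's data, and the original analysis was performed in R; this is an illustrative stdlib-Python reimplementation of the two tests named in the abstract.

```python
import math
import statistics

def unpaired_t(a, b):
    """Student's two-sample (unpaired) t statistic with pooled variance."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * statistics.variance(a)
              + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

# Hypothetical Flesch-style ease scores for three brochures per tool
chatgpt_ease = [62.1, 58.4, 60.7]
gemini_ease = [48.3, 51.0, 46.9]
t = unpaired_t(chatgpt_ease, gemini_ease)  # larger |t| => bigger readability gap

# Hypothetical DISCERN-style reliability scores paired with the ease scores
ease = chatgpt_ease + gemini_ease
reliability = [3.8, 3.5, 3.6, 3.2, 3.4, 3.1]
r = pearson_r(ease, reliability)  # r in [-1, 1]; positive => ease tracks reliability

print(f"t = {t:.2f}, r = {r:.2f}")
```

The t statistic would then be compared against the t distribution with n1 + n2 − 2 degrees of freedom to obtain the p-value reported in the Results.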