Abstract

Recent advancements in AI-driven chatbots such as Chat Generative Pre-Trained Transformer (ChatGPT) and BARD hold immense promise for transforming scientific research, particularly in healthcare. However, their integration into global health initiatives has been slower than anticipated, raising concerns about the accuracy and reliability of health-related information disseminated to the public. This study examines the accuracy of references generated by ChatGPT and BARD in the realm of community health, revealing significant disparities in their performance. While ChatGPT demonstrated partial accuracy in some topics, such as the global burden of disease, its references related to health promotion were entirely inaccurate. These findings underscore the challenges of AI-generated references for dynamic health subjects and emphasize the urgent need for improved methods to ensure the trustworthiness of health information. Integration of scientific databases into chatbots, exemplified by initiatives such as Scopus AI, offers a promising path toward accurate and authentic scientific writing in the future.
