Abstract

Background: Review articles play a critical role in informing medical decisions and identifying avenues for future research. With the introduction of artificial intelligence (AI), there has been growing interest in the potential of this technology to transform the synthesis of medical literature. OpenAI's Generative Pre-trained Transformer 4 (GPT-4; OpenAI Inc, San Francisco, CA) provides access to advanced AI that can quickly produce medical literature from simple prompts. The accuracy of the generated articles requires review, especially in subspecialty fields such as Allergy/Immunology.

Objective: To critically appraise AI-synthesized allergy-focused minireviews.

Methods: We tasked the GPT-4 chatbot with generating two 1,000-word reviews on the topics of hereditary angioedema and eosinophilic esophagitis. The authors critically appraised these articles using the Joanna Briggs Institute (JBI) tool for text and opinion and additionally evaluated domains of interest such as language, reference quality, and accuracy of content.

Results: The language of the AI-generated minireviews was carefully articulated and logically focused on the topic of interest; however, reviewers indicated that the AI-generated content lacked depth, did not appear to be the result of an analytical process, omitted critical information, and contained inaccuracies. Despite being instructed to use scientific references, the chatbot relied mainly on freely available resources and fabricated references.

Conclusion: AI holds the potential to change the landscape of medical literature synthesis; however, the inaccurate and fabricated information observed here calls for rigorous evaluation and validation of AI tools in generating medical literature, especially on subjects with limited available resources.
