This article reviews the current uses, potential risks, and practical recommendations for using chat generative pre-trained transformers (ChatGPT) in systematic reviews (SRs) and meta-analyses (MAs). The findings of prior research suggest that, for tasks such as literature screening and information extraction, ChatGPT can match or exceed the performance of human experts. However, for complex tasks such as risk of bias assessment, its performance remains significantly limited, underscoring the critical role of human expertise. Using ChatGPT as an adjunct tool in SRs and MAs requires careful planning and strict quality control and validation mechanisms to mitigate potential errors, such as those arising from artificial intelligence (AI) 'hallucinations'. This paper also provides specific recommendations for optimizing human-AI collaboration in SRs and MAs. Assessing the specific context of each task and implementing the most appropriate strategies are critical when using ChatGPT in support of research goals. Furthermore, transparency regarding the use of ChatGPT in research reports is essential to maintaining research integrity. Close attention to ethical norms, including issues of privacy, bias, and fairness, is also imperative. Finally, from a human-centered perspective, this paper emphasizes the importance of researchers cultivating continuous self-iteration, prompt engineering skills, critical thinking, cross-disciplinary collaboration, and ethical awareness, with the goals of continuously optimizing human-AI collaboration models within reasonable and compliant norms, enhancing the complex-task performance of AI tools such as ChatGPT, and, ultimately, achieving greater efficiency through technological innovation while upholding scientific rigor.