In the rapidly evolving world of technology, artificial intelligence (AI) has become deeply integrated into many aspects of our lives, including healthcare, education, finance, transportation, and entertainment. Notably, AI has also influenced the writing of textual works such as scientific papers, professional opinions, and educational texts. This study investigates the application of OpenAI's ChatGPT language model to writing scientific articles on telemedicine, specifically in the areas of cardiology, oncology, and remote medical examination. The study uses ChatGPT versions 3.5 and 4 to generate articles from three different prompts. The generated articles were evaluated on the reliability of the cited literature references, the impact factor (IF) of the journals in which the sources were published, and the relevance of the sources. The sources were divided into three categories: reliable, semi-reliable, and completely fictitious. The results demonstrate that ChatGPT can produce semantically coherent, error-free texts indistinguishable from human-written ones. However, the reliability of the literature references varies significantly. ChatGPT 4, benefiting from its larger training dataset, generates a higher percentage of reliable sources than ChatGPT 3.5. The IF analysis shows that high-impact journals predominate among the reliable sources, underscoring the model's effectiveness in selecting quality references. The study highlights the need for caution when using AI to write scientific articles, given the potential for biased, unverified, and inaccurate information; AI-generated content must be critically evaluated and vetted. In addition, the study emphasizes that correct use of AI and thoughtful drafting of prompts can improve the efficiency and quality of scientific papers. Future advances in AI technology are expected to further reduce errors and biases.