Abstract

ChatGPT is a large language model developed by OpenAI and designed to generate human-like responses to prompts. This study aimed to evaluate the ability of GPT-4 to generate scientific content and assist in scientific writing, using the medical topic of vitamin B12, and to compare its performance with that of its predecessor, GPT-3.5. The study examined responses from GPT-4 and GPT-3.5 to vitamin B12-related prompts, focusing on their quality and characteristics and comparing them with the established scientific literature. The results indicated that GPT-4 can potentially streamline scientific writing through its ability to edit language and to write abstracts, keywords, and abbreviation lists. However, significant limitations of ChatGPT were revealed, including its inability to identify and address bias, inability to include recent information, lack of transparency, and inclusion of inaccurate information. Additionally, it cannot check for plagiarism or provide proper references. The accuracy of GPT-4's answers was found to be superior to that of GPT-3.5. ChatGPT can be considered a helpful assistant in the writing process, but not a replacement for a scientist's expertise; researchers must remain aware of its limitations and use it appropriately. The improvements in consecutive ChatGPT versions suggest the possibility of overcoming some of the present limitations in the near future.
