Abstract

ChatGPT and Bard (now called Gemini), two conversational AI models developed by OpenAI and Google AI, respectively, have garnered considerable attention for their ability to engage in natural language conversations and perform various language-related tasks. While the versatility of these chatbots in generating text and simulating human-like conversations is undeniable, we wanted to evaluate their effectiveness in retrieving biological knowledge for curation and research purposes. To do so, we asked each chatbot a series of questions and scored the answers for quality. Out of a maximum score of 24, ChatGPT scored 5 and Bard scored 13. The issues we encountered included missing information, incorrect answers, and responses that combined accurate and inaccurate details. Notably, both tools tend to fabricate references to scientific papers, undermining their usability. In light of these findings, we recommend that biologists continue to rely on traditional sources while periodically reassessing the reliability of ChatGPT and Bard. As ChatGPT itself aptly suggested, for specific and up-to-date scientific information, established scientific journals, databases, and subject-matter experts remain the preferred avenues for trustworthy data.
