Introduction
Artificial intelligence language models (AI LMs) deployed as chatbots have gained widespread popularity, potentially influencing many aspects of education, including medical education. The present study aims to assess the accuracy and consistency of different AI LMs with respect to the histology and embryology knowledge taught during the first year of medical studies.
Methods
Five different chatbots (ChatGPT, Bing AI, Bard AI, Perplexity AI, and ChatSonic) were given two sets of multiple-choice questions (MCQs). The chatbots' test results were compared with those of first-year medical students on the same tests. The chatbots were instructed to classify each question according to the hierarchical cognitive domains of the revised Bloom's taxonomy. In parallel, two histology teachers independently classified the questions using the same criteria, and the chatbots' classifications were then compared with the teachers'. The consistency of the chatbots' answers was assessed by administering the same tests two months apart.
Results
The AI LMs successfully and correctly solved the histology and embryology MCQs. All five chatbots outperformed the first-year medical students on both the histology and the embryology tests. Compared with the teachers, the chatbots performed poorly when classifying questions according to the revised Bloom's cognitive taxonomy. There was an inverse correlation between question difficulty and the chatbots' classification accuracy. Retesting the chatbots after two months revealed a lack of consistency in both the MCQ answers and the classification of questions by revised Bloom's taxonomy learning stage.
Conclusion
Although certain chatbots can provide correct answers to the majority of diverse and heterogeneous questions, their lack of consistency over time warrants careful use as a medical education tool.