Abstract

Objectives: The number of approvals for AI-based systems is increasing rapidly, yet AI clinical trial designs rarely consider the impact of human–AI interaction. The aims of this work were to investigate how reading a description of the features of an AI system (ChatGPT) could influence willingness to use this technology and expectations towards it, as well as dental students' learning performance.

Methods: Dental students (N = 104) were asked to learn about the side effects of drugs used in dental practice either by reading recommended literature or by using ChatGPT. Expectations towards ChatGPT were measured by survey before and after reading a description of the system's features, whilst learning outcomes were evaluated with a pharmacology quiz.

Results: Students who used ChatGPT (YG condition) scored better on the pharmacology quiz than students who neither read the description nor used ChatGPT for learning (NN condition). Moreover, students who read the description of ChatGPT's features but did not use the system (NG condition) also scored better on the quiz than the NN condition, even though none of them used ChatGPT for learning. Compared with YG students, NG students had less trust in AI assistance for learning, and after reading the AI system description their expectations changed significantly, showing an association with quiz scores.

Conclusions: Students were reluctant to use ChatGPT, but reading a description of ChatGPT's features could alter expectations and enhance students' learning performance, suggesting a cognitive bias related to the AI description. The content of such descriptions should be reviewed and verified before an AI system is used, and introducing a description of the AI system's features to all participants in a clinical trial should be considered.

