Background
Clinical reasoning is an important topic in healthcare training, assessment, and research. Virtual patients (VPs) provide a safe environment in which to teach, assess, and conduct research on clinical reasoning and diagnostic accuracy. Our aim was to explore the details of the clinical reasoning process and the diagnostic accuracy of undergraduate medical students working with VPs using a concept mapping tool.

Methods
Over seven months we provided access to 67 German and 30 English VPs combined with a concept mapping tool to visualize and measure the clinical reasoning process of identifying problems, differential diagnoses, recommended tests, and treatment options, and of composing a summary statement about a VP. Learners had to submit a final diagnosis to conclude each VP scenario; they were allowed multiple attempts or could request the correct diagnosis from the system.

Results
We analyzed 1,393 completed concept maps from 317 learners. We found significant differences between maps in which learners reached the correct final diagnosis in one or multiple attempts and maps in which learners gave up and requested the solution from the system. The latter maps had lower scores, fewer summary statements, and fewer problems, differential diagnoses, tests, and treatments.

Conclusions
The differences in use patterns and scores between learners who reached the correct final diagnosis in one or multiple attempts and those who gave up indicate that diagnostic accuracy, in the form of a correct final diagnosis on the first attempt, should be reconsidered as the sole indicator of clinical reasoning competency. For the training, assessment, and research of clinical reasoning, we suggest focusing more on the details of the process of reaching a correct diagnosis than on whether it was made on the first attempt.