Abstract

In the gross anatomy laboratory, assessment is traditionally achieved through free‐response identification questions. Because student learning is strongly driven by assessment, it is natural that the trend of curricular integration should extend into the laboratory setting. Asking case‐based and higher‐order questions on anatomy practical exams illustrates the importance of anatomical knowledge in clinical reasoning and assesses students' ability to apply foundational information from an early point in their medical careers. In this study, several years of data from anatomy practical exams were analyzed to assess the impact of these alternative testing modalities on various measures of student performance. Exam items from multiple years (N > 300) were categorized by question type, Bloom's taxonomy level, structure type, and discipline, and categories were compared with Wilcoxon rank sum tests. Difficulty, discrimination index (DI), and point‐biserial (PBS) correlation were calculated for each item. Results demonstrated that integrated, multiple‐choice items with structures tagged on cadavers were significantly more difficult for students (p = 0.0015) and tended to have higher mean DI and PBS values. Questions categorized at higher Bloom's taxonomic levels were significantly harder than those at lower levels (p = 0.0039–0.05). Interestingly, identification of vessels was significantly harder than identification of many other structure types (p = 0.005–0.0432). These results support the use of alternative, integrated testing modalities in the anatomy laboratory as an effective way to reinforce important concepts and better align laboratory assessment with the objectives of an integrated curriculum.
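The abstract names three standard item-analysis statistics (difficulty, DI, PBS) and a Wilcoxon rank sum comparison between item categories. The sketch below is not the authors' code; it is a minimal illustration of how these quantities are conventionally computed from a students × items matrix of dichotomous (0/1) scores, assuming the common top/bottom-27% convention for the discrimination index.

```python
# Illustrative sketch only (not from the study): item difficulty,
# discrimination index (DI), point-biserial (PBS), and a Wilcoxon
# rank sum comparison between two item categories.
import numpy as np
from scipy import stats

def item_statistics(responses: np.ndarray) -> dict:
    """responses: students x items array of 0/1 item scores (assumed shape)."""
    n_students, n_items = responses.shape
    totals = responses.sum(axis=1)

    # Difficulty: proportion of students answering each item correctly.
    difficulty = responses.mean(axis=0)

    # DI: correct rate in the top 27% of students (by total score)
    # minus the correct rate in the bottom 27% -- a common convention;
    # the study may have used a different grouping.
    k = max(1, int(round(0.27 * n_students)))
    order = np.argsort(totals)
    di = responses[order[-k:]].mean(axis=0) - responses[order[:k]].mean(axis=0)

    # PBS: correlation of each dichotomous item score with the total score.
    pbs = np.array([
        stats.pointbiserialr(responses[:, j], totals).correlation
        for j in range(n_items)
    ])
    return {"difficulty": difficulty, "di": di, "pbs": pbs}

# Comparing difficulty across two item categories, e.g. tagged-structure
# multiple-choice items vs. free-response identification items
# (the boolean mask `is_tagged_mc` is a hypothetical label array):
#   res = item_statistics(responses)
#   stat, p = stats.ranksums(res["difficulty"][is_tagged_mc],
#                            res["difficulty"][~is_tagged_mc])
```

One caveat worth noting in practice: correlating an item against a total score that includes that item slightly inflates PBS, so a "corrected" variant often uses the rest-score (total minus the item) instead.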
