Abstract

Bloom's taxonomy is commonly used to assess the cognitive level of exam questions and other course assignments. Our previous research found that 'Blooming' pathology exam questions was difficult when using guidelines that were not discipline‐specific. Achieving consistency in scoring required that the observers independently rank and then discuss each exam question. Although this strategy is highly reliable, it is inefficient and may impose a limit on the number of questions that can be evaluated. In addition, the discussion‐based style of this approach makes it problematic for outside researchers to replicate the results. Building on research by Crowe et al. (2008), we developed a Blooming Anatomy Tool (BAT) that provides tailored guidelines for Blooming anatomy exam questions. To test the efficacy of the BAT, two groups of instructors Bloomed a series of anatomy exam questions, with each group receiving different scoring criteria. The first group was given a worksheet that broadly outlined Bloom's taxonomy, while the second group received the BAT. After comparing the Bloom levels assigned by individuals in each group to a key generated by the authors, we found that the BAT had a positive impact on accuracy and consistency in determining Bloom categories. We suggest that researchers utilizing Bloom's taxonomy for assessing course materials should consider seeking out or developing a discipline‐specific rubric.
