Abstract
Problem-solving and higher-order learning are goals of higher education. It has been repeatedly suggested that multiple-choice questions (MCQs) can be used to test higher-order learning, although objective empirical evidence is lacking and MCQs are often criticised for assessing only lower-order, factual, or ‘rote’ learning. These challenges are compounded by a lack of agreement on what constitutes higher-order learning: it is normally defined subjectively using heavily criticised frameworks such as Bloom’s taxonomy. There is also a lack of agreement on how to write MCQs that assess higher-order learning. Here we tested guidance for creating MCQs to assess higher-order learning by comparing the performance of students who were subject-matter novices with that of experts. We found that questions written using the guidance were much harder to answer when students had no prior subject knowledge, whereas lower-order questions could be answered by simply searching online. These findings suggest that questions written using the guidance do indeed test higher-order learning, and that such MCQs may be a valid alternative to other written assessment formats designed to test higher-order learning, such as essays, where reliability and cheating are a major concern.