Abstract

Problem-solving and higher-order learning are goals of higher education. It has been repeatedly suggested that multiple-choice questions (MCQs) can be used to test higher-order learning, although objective empirical evidence is lacking and MCQs are often criticised for assessing only lower-order, factual, or ‘rote’ learning. These challenges are compounded by a lack of agreement on what constitutes higher-order learning: it is normally defined subjectively using heavily criticised frameworks such as Bloom’s taxonomy. There is also a lack of agreement on how to write MCQs that assess higher-order learning. Here we tested guidance for the creation of MCQs to assess higher-order learning by evaluating the performance of students who were subject-matter novices versus experts. We found that questions written using the guidance were much harder to answer when students had no prior subject knowledge, whereas lower-order questions could be answered by simply searching online. These findings suggest that questions written using the guidance do indeed test higher-order learning, and that such MCQs may be a valid alternative to other written assessment formats designed to test higher-order learning, such as essays, where reliability and cheating are major concerns.
