Abstract

Interest in measuring and evaluating student learning in higher education is growing, and many tools are available to assess it. However, these tools may be more or less appropriate under different conditions. This study provides evidence on the appropriate use of pre/post‐tests. We examine whether graded tests elicit a higher level of performance (a better representation of actual learning gains) than ungraded post‐tests, and whether this difference is affected by the difficulty level of the questions asked (knowledge/comprehension vs. analysis/application) and by students' level in the degree programme. Results indicate that post‐tests may not demonstrate the full level of student mastery of learning objectives, and that both question difficulty and degree‐programme level affect the difference between graded and ungraded assessments. Some of these differences may be due to causes other than grades on the assessments: students may have benefited from the post‐test as a review of the material, or from additional studying between the post‐test and the final examination. Results also indicate that pre‐tests can be useful in identifying appropriate changes in course materials over time.