Abstract

Item test analysis is an aid for identifying items that need to be eliminated from an assessment. An automatic elimination procedure based on item statistics could therefore help to improve the quality of a test in an objective manner. This was investigated by studying the effect of a standardized elimination procedure on the test results of a second-year course over a period of 6 successive years in 1,624 candidates. Cohort effects on item elimination were examined by determining the number of additional items that had to be eliminated from three different tests, in 3 successive academic years, in two cohorts. Items that were part of more than one test and had to be eliminated according to the procedure in at least one of the tests turned out, in most of the other tests, to be retained according to the same procedure. The procedure harmed the high-scoring students relatively more often than the other students, and the number of eliminated items appeared to be cohort dependent. As a consequence, automatic elimination procedures unacceptably reduce the transparency of the grading process and transform valid tests into inadequate samples of the course content.
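
The abstract does not specify which item statistics drive the elimination rule. A common approach in item analysis flags items with a low item-rest (point-biserial) correlation or an extreme difficulty (proportion correct). The sketch below illustrates such a rule; the function name, thresholds, and criteria are illustrative assumptions, not the authors' actual procedure.

```python
# Hypothetical item-elimination rule based on classical item statistics.
# Assumed criteria (not from the paper): flag an item when its item-rest
# (point-biserial) correlation is low or its difficulty is extreme.
import numpy as np

def flag_items(responses, min_item_rest_r=0.10, min_p=0.20, max_p=0.95):
    """responses: (n_candidates, n_items) array of 0/1 item scores.
    Returns a boolean array marking items the rule would eliminate."""
    responses = np.asarray(responses, dtype=float)
    n_candidates, n_items = responses.shape
    p = responses.mean(axis=0)                 # item difficulty (proportion correct)
    total = responses.sum(axis=1)              # candidate total scores
    flags = np.zeros(n_items, dtype=bool)
    for j in range(n_items):
        rest = total - responses[:, j]         # rest score excludes the item itself
        if np.std(responses[:, j]) == 0 or np.std(rest) == 0:
            r = 0.0                            # no variance: correlation undefined
        else:
            r = np.corrcoef(responses[:, j], rest)[0, 1]
        flags[j] = (r < min_item_rest_r) or (p[j] < min_p) or (p[j] > max_p)
    return flags

# Example: 5 candidates, 4 items; the last item (answered correctly by all)
# is flagged because its difficulty exceeds the upper threshold.
scores = np.array([[1, 0, 1, 1],
                   [1, 1, 0, 1],
                   [0, 0, 1, 1],
                   [1, 1, 1, 1],
                   [0, 1, 0, 1]])
print(flag_items(scores))
```

The study's conclusion cautions against applying such a rule automatically: which items get flagged depends on the cohort, so the same item may be eliminated in one administration and retained in another.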
