Abstract

Objective tests are among the most widely used forms of assessment in educational institutions. However, designing high-quality multiple-choice tests is difficult, as it requires test designers to trial, analyze, and evaluate question items so they can be adjusted and improved before use. This study presents how to analyze and evaluate multiple-choice questions based on Classical Test Theory. The data used in this study are the results of four basic Informatics exam papers taken by regular students majoring in Informatics Teacher Education and Computer Science at Dong Thap University, from the 2017-2018 to the 2020-2021 academic year. Based on item parameters calculated entirely in Microsoft Excel, the authors show how to identify good question items that can be added to question banks for future testing and assessment activities, and how to flag unsatisfactory questions that should be revised, improved, or eliminated.
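The abstract refers to item parameters computed in Microsoft Excel under Classical Test Theory. As a rough illustration of the kind of statistics involved (not the authors' actual spreadsheet), the Python sketch below computes two standard CTT indices: item difficulty (the p-value) and the upper-lower discrimination index. The function names, the 0/1 response matrix, and the 27% group fraction are illustrative assumptions.

```python
# Illustrative sketch of Classical Test Theory item analysis.
# Assumes `responses` is a 0/1 matrix (rows = students, columns = items).

from typing import List

def item_difficulty(item_scores: List[int]) -> float:
    """Proportion of students answering the item correctly (p-value)."""
    return sum(item_scores) / len(item_scores)

def item_discrimination(responses: List[List[int]], item: int,
                        group_frac: float = 0.27) -> float:
    """Upper-lower discrimination index D for one item.

    Students are ranked by total score; D is the difference in the item's
    p-value between the top and bottom groups (commonly the top/bottom 27%).
    """
    totals = [sum(row) for row in responses]
    order = sorted(range(len(responses)), key=lambda i: totals[i], reverse=True)
    k = max(1, int(len(responses) * group_frac))
    upper = [responses[i][item] for i in order[:k]]
    lower = [responses[i][item] for i in order[-k:]]
    return sum(upper) / k - sum(lower) / k

# Example: 6 students, 4 items (1 = correct, 0 = incorrect)
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

for j in range(4):
    p = item_difficulty([row[j] for row in responses])
    d = item_discrimination(responses, j)
    print(f"item {j}: difficulty p = {p:.2f}, discrimination D = {d:.2f}")
```

In practice, items with difficulty and discrimination values inside accepted ranges would be candidates for the question bank, while items outside those ranges would be flagged for revision or removal, which is the classification the paper describes.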
