In recent years, the computerised adaptive test (CAT) has gained popularity over conventional exams for evaluating student capabilities with the desired accuracy. However, the key limitation of CAT is that it requires a large pool of pre-calibrated questions. In the absence of such a pre-calibrated question bank, offline exams with uncalibrated questions have to be conducted. Many important large-scale exams are offline, for example the Graduate Aptitude Test in Engineering (GATE) and the Japanese University Entrance Examination (JUEE). In offline exams, marks are used as the indicator of students’ capabilities. In this work, our key contribution is to question whether the marks obtained are indeed a good measure of students’ capabilities. To this end, we propose an evaluation methodology that mimics the evaluation process of CAT. In our approach, based on the marks scored by students on various questions, we iteratively estimate question parameters such as difficulty, discrimination and guessing factor, as well as student parameters such as capability, using the 3-parameter logistic ogive model. Our algorithm uses alternating maximisation to maximise the log-likelihood of the question and student parameters given the marks. We compare our approach with marks-based evaluation using simulations. The simulation results show that our approach outperforms marks-based evaluation.
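To make the described procedure concrete, the sketch below illustrates one possible alternating-maximisation scheme for the 3-parameter logistic model: item parameters (difficulty, discrimination, guessing) and student capabilities are re-estimated in turn by maximising the Bernoulli log-likelihood of a binary response matrix. This is an illustrative sketch under assumed conventions, not the authors' implementation; the parameter names (theta, a, b, c), bounds, and the choice of optimiser are assumptions.

```python
# Illustrative sketch (not the authors' code): alternating maximisation of the
# 3PL log-likelihood for question parameters and student capabilities, given a
# binary response matrix R of shape (students x questions).
import numpy as np
from scipy.optimize import minimize


def p_correct(theta, a, b, c):
    """3PL probability of a correct response: c + (1 - c) * logistic(a * (theta - b))."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))


def neg_log_lik(R, theta, a, b, c, eps=1e-9):
    """Negative Bernoulli log-likelihood of responses R under the 3PL model."""
    P = p_correct(theta[:, None], a[None, :], b[None, :], c[None, :])
    P = np.clip(P, eps, 1.0 - eps)
    return -np.sum(R * np.log(P) + (1 - R) * np.log(1 - P))


def alternating_mle(R, n_iters=20):
    """Alternate between re-estimating item parameters and student abilities."""
    n_students, n_items = R.shape
    theta = np.zeros(n_students)      # student capability
    a = np.ones(n_items)              # discrimination
    b = np.zeros(n_items)             # difficulty
    c = np.full(n_items, 0.2)         # guessing factor
    for _ in range(n_iters):
        # Step 1: fix abilities, re-estimate each question's parameters.
        for j in range(n_items):
            def item_obj(x, j=j):
                return neg_log_lik(R[:, [j]], theta, np.array([x[0]]),
                                   np.array([x[1]]), np.array([x[2]]))
            res = minimize(item_obj, x0=[a[j], b[j], c[j]],
                           bounds=[(0.2, 3.0), (-4.0, 4.0), (0.0, 0.35)])
            a[j], b[j], c[j] = res.x
        # Step 2: fix item parameters, re-estimate each student's capability.
        for i in range(n_students):
            def ability_obj(x, i=i):
                return neg_log_lik(R[[i], :], np.array([x[0]]), a, b, c)
            res = minimize(ability_obj, x0=[theta[i]], bounds=[(-4.0, 4.0)])
            theta[i] = res.x[0]
    return theta, a, b, c
```

In this sketch, each alternating step can only increase the joint log-likelihood, and the bounds keep the guessing and discrimination parameters in conventionally plausible ranges; the actual constraints and stopping criterion used by the authors may differ.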