Abstract
This research aims to develop a Computer Adaptive Test (CAT) system using the Item Response Theory (IRT) approach. The study is part of the development of a web-based system using the Research and Development (R&D) method with the Four-D (4-D) model. At its core, the system is similar to a Computer-Based Test (CBT); the critical difference lies in its ability to randomize and select questions that match each test-taker's ability level using an Item Response Theory (IRT) algorithm. The system employs the three-parameter logistic (3-PL) IRT model, which accounts for item difficulty, item discrimination, and the probability of guessing on each question. The examination system presents questions to students based on their responses to previous questions, ensuring that each test-taker receives a unique question sequence. The exam concludes when the test-taker's ability has been estimated with sufficient precision (standard error ≤ 0.01) or when all questions have been answered. The outcome of this research is a Computer Adaptive Test (CAT) system based on Item Response Theory (IRT) that can be used to assess students' learning outcomes. The research was implemented in the Multimedia Department of SMK Negeri 1 Gunung Talang with 90 students as the research sample. The practicality evaluation of the system received very high scores, indicating that the IRT-based CAT system is highly practical and effective in achieving the established measurement goals.
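For illustration, the adaptive logic summarized above can be sketched in code. This is a minimal, hypothetical sketch and not the authors' implementation: the function names, the grid-based ability estimator, and the maximum-information item selection rule are assumptions. It shows the standard 3-PL response probability built from item discrimination, difficulty, and guessing parameters, and a test loop that stops once the standard error of the ability estimate reaches 0.01 or the item bank is exhausted.

```python
import numpy as np

def p_correct(theta, a, b, c):
    """3-PL probability of a correct response: discrimination a, difficulty b, guessing c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3-PL item at ability theta."""
    p = p_correct(theta, a, b, c)
    return (a ** 2) * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

def estimate_theta(responses, params, grid=np.linspace(-4, 4, 81)):
    """Grid-based maximum-likelihood ability estimate and its standard error."""
    log_lik = np.zeros_like(grid)
    for (a, b, c), u in zip(params, responses):
        p = p_correct(grid, a, b, c)
        log_lik += u * np.log(p) + (1 - u) * np.log(1.0 - p)
    theta_hat = grid[np.argmax(log_lik)]
    info = sum(item_information(theta_hat, a, b, c) for (a, b, c) in params)
    se = 1.0 / np.sqrt(info) if info > 0 else np.inf
    return theta_hat, se

def run_cat(item_bank, answer_fn, se_target=0.01):
    """Adaptive loop: pick the most informative unused item, re-estimate, stop at SE <= target."""
    theta, administered, responses = 0.0, [], []
    remaining = list(range(len(item_bank)))
    while remaining:
        # Choose the unused item with maximum information at the current ability estimate.
        next_idx = max(remaining, key=lambda i: item_information(theta, *item_bank[i]))
        remaining.remove(next_idx)
        administered.append(item_bank[next_idx])
        responses.append(answer_fn(next_idx))  # answer_fn returns 1 (correct) or 0 (incorrect)
        theta, se = estimate_theta(responses, administered)
        if se <= se_target:  # ability estimated with the required precision
            break
    return theta
```

Here `item_bank` would be a list of (a, b, c) parameter triples calibrated beforehand, and `answer_fn` stands in for collecting the test-taker's response to the selected question.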