Abstract

In an achievement test, examinees with the required knowledge and skill for a test item are expected to answer it correctly, while examinees lacking the necessary knowledge are expected to give an incorrect answer. However, an examinee can answer a multiple-choice item correctly by guessing, or give an incorrect response to an easy item due to anxiety or carelessness. Either case may bias the estimation of examinee abilities and item parameters. The 4PL IRT model and the DINA model can be used to mitigate these negative effects on parameter estimation. The current simulation study compares the pseudo-guessing and slipping parameters estimated by the 4PL IRT model and the DINA model under several study conditions. The DINA model was used to simulate the datasets in the study. The results showed that the bias of the estimated slipping and guessing parameters was reasonably small for both the 4PL IRT and DINA models in general, although the estimates were more biased when the datasets were analyzed with the 4PL IRT model than with the DINA model (i.e., the average bias for both the guessing and slipping parameters was .00 for the DINA model but .08 for the 4PL IRT model). Accordingly, both the 4PL IRT and DINA models can be considered for analyzing datasets contaminated with guessing and slipping effects.

Highlights

  • Psychological and educational tests are usually used for observing a sample of examinees’ behaviors

  • Results were summarized using the average root mean square error (RMSE) of the item parameters and its 95% confidence intervals for the 4PL item response theory (IRT) and DINA models across the study conditions

  • The 95% confidence intervals for the DINA model were so narrow across all study conditions that they are not visible in any figure

Summary

Introduction

Psychological and educational tests are usually used to observe a sample of examinees’ behaviors. A correct response is expected from an examinee with the required knowledge for an item, whereas an examinee without the necessary knowledge is expected to give an incorrect answer (Rowley & Traub, 1977). This assumption may not hold for multiple-choice test items. In the presence of guessing and slipping effects, the estimation of examinees’ abilities and item parameters may be biased. These two effects can be modeled using item response theory (IRT) models and cognitive diagnostic models (CDMs). Junker (2001) used the deterministic inputs, noisy “and” gate (DINA; Haertel, 1989; Junker & Sijtsma, 2001) model as an initial tool for
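The DINA response mechanism described above can be sketched in a short simulation. This is a minimal illustration only: the sample sizes, Q-matrix, and parameter ranges below are assumptions for demonstration, not the design used in the study. Under DINA, an examinee who has mastered every attribute an item requires answers correctly with probability 1 − s_j (one minus the slipping parameter), and otherwise with the guessing probability g_j.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed sizes for illustration (not the study's actual design)
n_examinees, n_items, n_attributes = 500, 10, 3

# Q-matrix: which attributes each item requires (randomly drawn here)
Q = rng.integers(0, 2, size=(n_items, n_attributes))
Q[Q.sum(axis=1) == 0, 0] = 1  # ensure every item requires >= 1 attribute

# Examinee attribute-mastery profiles (alpha)
alpha = rng.integers(0, 2, size=(n_examinees, n_attributes))

# Item guessing (g) and slipping (s) parameters
g = rng.uniform(0.05, 0.20, size=n_items)
s = rng.uniform(0.05, 0.20, size=n_items)

# eta[i, j] = 1 iff examinee i has mastered every attribute item j requires
eta = (alpha @ Q.T == Q.sum(axis=1)).astype(int)

# DINA item response function: P(X_ij = 1) = (1 - s_j)^eta * g_j^(1 - eta)
p_correct = np.where(eta == 1, 1 - s, g)

# Draw dichotomous (0/1) responses
responses = (rng.random((n_examinees, n_items)) < p_correct).astype(int)
```

Datasets generated this way are "contaminated" by design: masters occasionally slip to an incorrect answer, and non-masters occasionally guess correctly, which is exactly the situation both the DINA and 4PL IRT models are then asked to recover.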
