Abstract
Item Response Theory (IRT) is utilised to detect bias in assessment tools and to address issues such as faked or manipulated responses, enhancing the reliability and stability of conclusions drawn from personality assessment. This article examines the item parameter estimates of a scale and the effectiveness of the one-, two-, and three-parameter logistic models in analysing response stability in personality measurement across repeated administrations. Three hundred undergraduate students at three tertiary institutions in Nigeria were sampled using a multi-stage sampling procedure. Data were collected using an adapted version of the Big Five Inventory (BFI) with a reliability coefficient of 0.85. The results showed that the item parameter estimates (mean thresholds) were within the recommended benchmarks. A comparison of the three IRT models based on the log-likelihood (lnL), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC) values revealed that the two-parameter logistic model best fit the personality data from repeated administration among undergraduates. It is recommended that IRT model fit and model comparison be applied to gain insight into the functioning of items and tests, rather than relying solely on a single statistical decision rule.
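To make the model-comparison logic concrete, the minimal sketch below shows how the one-, two-, and three-parameter logistic models nest within one another and how AIC and BIC trade fit against parameter count. The log-likelihood values and the 44-item parameter counts are hypothetical placeholders for illustration, not the study's estimates.

```python
import math

def irt_prob(theta, a=1.0, b=0.0, c=0.0):
    """Probability of a keyed response under the 3PL model.

    theta : latent trait level
    a     : discrimination (held at a common value in the 1PL model)
    b     : difficulty / threshold
    c     : pseudo-guessing (fixed at 0 in the 1PL and 2PL models)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# The three models nest: 3PL -> 2PL by fixing c = 0 -> 1PL by fixing a.
p3 = irt_prob(0.5, a=1.2, b=-0.3, c=0.15)  # 3PL
p2 = irt_prob(0.5, a=1.2, b=-0.3)          # 2PL (c = 0)
p1 = irt_prob(0.5, b=-0.3)                 # 1PL (common slope)
print(round(p3, 3), round(p2, 3), round(p1, 3))

def aic(log_lik, k):
    # AIC = 2k - 2 lnL, where k is the number of estimated parameters
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    # BIC = k ln(n) - 2 lnL, where n is the number of respondents
    return k * math.log(n) - 2 * log_lik

# Hypothetical log-likelihoods for a 44-item scale with n = 300:
models = {
    "1PL": {"lnL": -7450.0, "k": 44 + 1},  # one threshold per item + common slope
    "2PL": {"lnL": -7320.0, "k": 44 * 2},  # slope and threshold per item
    "3PL": {"lnL": -7315.0, "k": 44 * 3},  # slope, threshold, guessing per item
}
for name, m in models.items():
    print(name, round(aic(m["lnL"], m["k"]), 1),
          round(bic(m["lnL"], m["k"], 300), 1))
```

The model with the smallest AIC/BIC is preferred; in this illustrative run the small lnL gain of the 3PL over the 2PL is outweighed by its extra parameters, mirroring the kind of result reported in the abstract.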