Abstract

encouraged may give a lower score to an item about self-confidence than those from a country where more self-assured people are well respected, even when their latent trait levels of self-confidence are the same. In psychometric terminology, this phenomenon—unequal response patterns among groups—is called differential item functioning (DIF), a serious bias that threatens survey-based research [3]. If DIF is suspected, we naturally question whether a difference in survey scores between two groups stems from a real difference in the trait we intend to measure or, at least in part, from DIF between the groups [4]. Thus, it is essential to ensure equivalent response patterns for survey items across groups before proceeding to any between-group comparison of survey scores or more sophisticated analyses. Unfortunately, this step is omitted in survey-based studies more often than not [5]. In this series of Safety Attitudes Questionnaire–Korean Version (SAQ-K) articles, we have intentionally postponed the discussion of DIF [6-9] because we planned to utilize item response theory (IRT) for DIF detection. IRT is known to be superior for this purpose because of its conditional invariance property, which supports better decisions about DIF than the traditional sum scores of a questionnaire [10]. We waited for the successful application
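For readers unfamiliar with how IRT-based DIF detection works in practice, the following is a minimal Python sketch, not the authors' method: it simulates two-parameter logistic (2PL) responses for two groups with one deliberately biased item, fits the 2PL model separately in each group by marginal maximum likelihood, and flags the item whose difficulty estimates diverge between groups, in the spirit of Lord's test. All data, parameter values, and names here are hypothetical; a real analysis would also use standard errors, handle group differences in average trait level (impact) via anchor items, and typically rely on dedicated IRT software.

```python
# Hypothetical illustration of IRT-based DIF screening (not the SAQ-K analysis):
# simulate 2PL responses for two groups, fit a 2PL model per group via
# marginal maximum likelihood with Gauss-Hermite quadrature, and compare
# each item's difficulty estimate across groups.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
J, N = 5, 1000  # items, respondents per group (made-up sizes)

# True item parameters; item 0 is made harder for group B (uniform DIF).
a_true = np.array([1.2, 0.8, 1.0, 1.5, 0.9])
b_A = np.array([0.0, -0.5, 0.3, 0.8, -1.0])
b_B = b_A.copy()
b_B[0] += 0.7  # injected DIF on item 0

def simulate(b, n):
    # Draw latent traits from N(0, 1) and binary responses under the 2PL model.
    theta = rng.standard_normal(n)
    p = expit(a_true * (theta[:, None] - b))
    return (rng.random((n, J)) < p).astype(int)

X_A, X_B = simulate(b_A, N), simulate(b_B, N)

# Gauss-Hermite quadrature (probabilists' version) for a standard normal prior;
# assuming equal trait distributions in both groups, which a real study must check.
nodes, weights = np.polynomial.hermite_e.hermegauss(21)
weights = weights / weights.sum()

def neg_marginal_loglik(params, X):
    # Negative log marginal likelihood, integrating the trait out numerically.
    a, b = params[:J], params[J:]
    p = expit(a * (nodes[:, None] - b))                 # K nodes x J items
    ll = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T    # N persons x K nodes
    return -np.sum(np.log(np.exp(ll) @ weights))

def fit_2pl(X):
    # Estimate (a_j, b_j) for all items with simple box constraints.
    x0 = np.concatenate([np.ones(J), np.zeros(J)])
    res = minimize(neg_marginal_loglik, x0, args=(X,), method="L-BFGS-B",
                   bounds=[(0.2, 4.0)] * J + [(-4.0, 4.0)] * J)
    return res.x[:J], res.x[J:]

a_hatA, b_hatA = fit_2pl(X_A)
a_hatB, b_hatB = fit_2pl(X_B)

for j in range(J):
    print(f"item {j}: b_A={b_hatA[j]:+.2f}  b_B={b_hatB[j]:+.2f}  "
          f"gap={b_hatB[j] - b_hatA[j]:+.2f}")
# Item 0 should show a between-group difficulty gap near +0.7, while the
# DIF-free items stay close; a formal Lord test would add standard errors.
```

Because the comparison conditions on the latent trait rather than on observed sum scores, a flagged item reflects unequal item behavior at the same trait level, which is exactly the conditional invariance argument the abstract cites in favor of IRT [10].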
