Abstract

Psychology has become less WEIRD in recent years, marking progress toward becoming a truly global psychology. However, this increase in cultural diversity is not matched by greater attention to cultural biases in research. A significant challenge in culture-comparative research in psychology is that any comparison is open to possible item bias and non-invariance. Unfortunately, many psychologists are not aware of these problems and their implications, and do not know how best to test for invariance in their data. We provide a general introduction to invariance testing and a tutorial on three major classes of techniques that can be easily implemented in the free software and statistical language R. Specifically, we describe (1) confirmatory and multi-group confirmatory factor analysis, with extensions to exploratory structural equation modeling and multi-group alignment; (2) iterative hybrid logistic regression; and (3) exploratory factor analysis and principal component analysis with Procrustes rotation. We pay specific attention to effect size measures of item bias and differential item functioning. Code in R is provided in the main text and online (see https://osf.io/agr5e/), and more extended code and a general introduction to R are available in the Supplementary Materials.

Highlights

  • Citation: Fischer R and Karl JA (2019). A Primer to (Cross-Cultural) Multi-Group Invariance Testing Possibilities in R. Frontiers in Psychology (Specialty section: Cultural Psychology). Received: 30 November 2018; Accepted: 14 June 2019; Published: 18 July 2019.

  • We can set up a configural invariance test model by specifying the grouping variable and calling the relevant fit indices: fitmeasures(cfa(model = new_model, data = example, group = "country", estimator = "ML"), c("cfi", "tli", "rmsea", "srmr"))

  • Exploratory structural equation modeling (ESEM) is a relatively novel approach which has already been used by some cross-cultural researchers (e.g., Marsh et al., 2009; Vazsonyi et al., 2015); see the sketch after this list.

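To illustrate the ESEM approach mentioned in the last highlight, the sketch below shows one way to specify a two-factor ESEM in lavaan. It is a minimal sketch, assuming a recent lavaan version (0.6-13 or later) that supports efa() blocks in the model syntax; the item names x1 to x6, the two-factor structure, and the example data with its country grouping variable are placeholders rather than the authors' actual model.

    library(lavaan)

    # Hypothetical two-factor ESEM block: all six items load on both
    # exploratory factors, with a geomin rotation applied to the solution
    esem_model <- '
      efa("block1")*f1 +
      efa("block1")*f2 =~ x1 + x2 + x3 + x4 + x5 + x6
    '

    # Single-group ESEM fit; adding group = "country" would yield a
    # multi-group (configural) ESEM that can then be constrained further
    fit_esem <- sem(model = esem_model, data = example, rotation = "geomin")
    fitmeasures(fit_esem, c("cfi", "tli", "rmsea", "srmr"))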

Summary

INTRODUCTION

There are no strict statistical guidelines for how large a change in fit indices has to be in order to be considered meaningful. Theoretical considerations of these modification indices are again important: there might be meaningful theoretical reasons (conceptual differences in item meaning) or methodological reasons (item bias such as translation issues, culture specificity of item content, etc.) why either factor loadings or intercepts differ across groups.

We can set up a configural invariance test model by specifying the grouping variable and calling the relevant fit indices: fitmeasures(cfa(model = new_model, data = example, group = "country", estimator = "ML"), c("cfi", "tli", "rmsea", "srmr")). For testing scalar invariance, in which we constrain both the loadings and the intercepts to be equal, we can call this function: fitmeasures(cfa(model = new_model, data = example, group = "country", estimator = "ML", group.equal = c("loadings", "intercepts")), c("cfi", "tli", "rmsea", "srmr")). Further restrictions on both loadings and intercepts show that the data fit better using the ESEM approach, even when more restrictive models are used.
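For convenience, the snippets above can be combined into a single model-comparison sequence. The following is a minimal sketch, assuming the lavaan package is loaded and that new_model and the example data with its country grouping variable are defined as in the tutorial:

    library(lavaan)

    fit_indices <- c("cfi", "tli", "rmsea", "srmr")

    # Configural invariance: same factor structure, parameters free in each group
    fit_configural <- cfa(model = new_model, data = example,
                          group = "country", estimator = "ML")

    # Metric invariance: factor loadings constrained equal across groups
    fit_metric <- cfa(model = new_model, data = example,
                      group = "country", estimator = "ML",
                      group.equal = "loadings")

    # Scalar invariance: loadings and intercepts constrained equal across groups
    fit_scalar <- cfa(model = new_model, data = example,
                      group = "country", estimator = "ML",
                      group.equal = c("loadings", "intercepts"))

    # Compare fit indices across the increasingly constrained models
    sapply(list(configural = fit_configural,
                metric     = fit_metric,
                scalar     = fit_scalar),
           fitmeasures, fit.measures = fit_indices)

Deteriorating fit (lower CFI/TLI, higher RMSEA/SRMR) when moving from the configural to the metric and scalar models indicates at which level invariance breaks down.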

Limitations
Evaluation of Logistic Regression
SUMMARY
