Abstract

Researchers rely on psychometric principles when trying to gain an understanding of unobservable psychological phenomena disconfounded from the methods used. Psychometric models provide us with tools to support this endeavour, but they are agnostic to the meaning researchers intend to attribute to the data. We define method effects as resulting from actions which weaken the psychometric structure of measurement, and argue that the solution to this confounding will ultimately rest on testing whether the data collected fit a psychometric model based on a substantive theory, rather than on a search for the model that best fits the data. We highlight the importance of taking the notions of fundamental measurement seriously by reviewing distinctions between the Rasch measurement model and the more generalised 2PL and 3PL IRT models. We then present two lines of research that highlight considerations for making method effects explicit in experimental designs. First, we contrast the use of experimental manipulations to study measurement reactivity during the assessment of metacognitive processes with factor-analytic research of the same. The former suggests differential performance-facilitating and -inhibiting reactivity as a function of other individual differences, whereas factor-analytic research suggests a ubiquitous, monotonically predictive confidence factor. Second, we evaluate differential effects of context and source on within-individual variability indices of personality derived from multiple observations, again highlighting the importance of a structured and theoretically grounded observational framework. We conclude by arguing that substantive variables can act as method effects and should be considered at the time of design rather than after the fact, and without compromising measurement ideals.

Highlights

  • We have observed that there is a belief among some researchers that highly sophisticated statistical techniques will be able to correct for fundamental problems in the interpretability of assessment data

  • Our core objective was to highlight that any solution to managing method effects, whether by factor analysis, item response theory, or mixed-effects multi-level models, will rest on testing whether the data collected fit the expected underlying quantitative nature of the attribute

  • Our argument is that while “method factors”, as systematic sources of individual differences, can certainly be a threat to validity and must be controlled for appropriately, that control should be planned prior to data collection rather than crafted to suit the data once collected


Summary

INTRODUCTION

We have observed that there is a belief among some researchers that highly sophisticated statistical techniques will be able to correct for fundamental problems in the interpretability of assessment data. We begin this review by first reminding ourselves of the importance of measurement models (Kellen et al., 2021; Stemler and Naples, 2021) and highlighting the distinction between the Rasch measurement model and the general IRT approach, both of which are inherently latent variable measurement models. We do this in an attempt to demonstrate the slippery psychometric slope one can find oneself on when the balance between data and theory is misaligned or too heavily informed by pragmatics, such as attempting to control for the potential impact of extraneous factors. What we summarise here is far from new, but it does remind us of the importance of theory, even when it comes to considering the treatment of method effects.
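To make that distinction concrete, the standard textbook formulations of these models can be written as follows (the notation here is ours, a sketch of the conventional parameterisations rather than that of the full paper). For person p and item i, the Rasch model fixes a common discrimination across all items, the 2PL adds an item-specific discrimination parameter a_i, and the 3PL further adds a lower-asymptote ("guessing") parameter c_i:

\[ P(X_{pi}=1 \mid \theta_p) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)} \qquad \text{(Rasch)} \]

\[ P(X_{pi}=1 \mid \theta_p) = \frac{\exp\{a_i(\theta_p - b_i)\}}{1 + \exp\{a_i(\theta_p - b_i)\}} \qquad \text{(2PL)} \]

\[ P(X_{pi}=1 \mid \theta_p) = c_i + (1 - c_i)\,\frac{\exp\{a_i(\theta_p - b_i)\}}{1 + \exp\{a_i(\theta_p - b_i)\}} \qquad \text{(3PL)} \]

Only under the Rasch constraint does the unweighted sum score remain a sufficient statistic for \(\theta_p\), the property that underpins the model's claim to fundamental (additive conjoint) measurement; the 2PL and 3PL purchase better fit to the data at the cost of that property, which is precisely the data-versus-theory trade-off at issue here.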

Method Effects and Quantifiable Structure
CONCLUSION
A Final Comment on Method Effects
ETHICS STATEMENT
