Abstract

Background

Patient-reported outcome (PRO) measures play a key role in the advancement of patient-centered care research. The accuracy of inferences, relevance of predictions, and true nature of the associations made with PRO data depend on the validity of these measures. Errors inherent to self-report measures can seriously bias the estimation of the constructs assessed by a scale. A well-documented disadvantage of self-report measures is their sensitivity to response style (RS) effects, such as a respondent's tendency to select the extremes of a rating scale. Although the biasing effect of extreme responding on constructs measured by self-report tools has been widely acknowledged and studied across disciplines, little attention has been given to the development and systematic application of methodologies to assess and control for this effect in PRO measures.

Methods

We reviewed the methodological approaches that have been proposed to study extreme response style (ERS) effects. We applied a multidimensional item response theory model to simultaneously estimate and correct for the impact of ERS on trait estimation in a PRO instrument. Model estimates were used to study the biasing effects of ERS on sum scores for individuals with the same amount of the targeted trait but different levels of ERS. We evaluated the effect of joint estimation of multiple scales and ERS on trait estimates, and demonstrated the biasing effects of ERS on these trait estimates when used as explanatory variables.

Results

A four-dimensional model accounting for ERS bias provided a better fit to the response data. Increasing levels of ERS showed bias in total scores as a function of trait estimates. The effect of ERS was greater when the pattern of extreme responding was the same across multiple scales modeled jointly. The estimated item category intercepts provided evidence of content-independent category selection. Uncorrected trait estimates used as explanatory variables in prediction models showed downward bias.

Conclusions

A comprehensive evaluation of the psychometric quality and soundness of PRO assessment measures should incorporate the study of ERS as a potential nuisance dimension affecting the accuracy and validity of scores, and the impact of PRO data in clinical research and decision making.

Highlights

  • Patient-reported outcome (PRO) measures play a key role in the advancement of patient-centered care research

  • We present a general overview of the most common methods referenced in the literature and investigate the potential effects of extreme response style (ERS) on trait estimates, applying a multidimensional methodology to item-level rating scores from a widely used PRO assessment tool in mental health: the NEO Five-Factor Inventory (NEO-FFI; [38])

  • To establish the presence of ERS as a dimension in the response data set, we specified a set of preliminary models with varying constraints


Introduction

Patient-reported outcome (PRO) measures play a key role in the advancement of patient-centered care research. PROs are increasingly used in clinical trials as primary or key secondary outcomes to measure a wide range of health-related quality of life constructs and their determinants, including the patients' perspective of symptoms and the beneficial effects of drug therapies [1,2,3]. Data collected with these self-reported measures provide valuable input for assessing health status, informing clinical decision-making, and judging clinical improvement. Content-irrelevant or nuisance factors, however, such as personality traits, may systematically influence and distort responses to survey questions. This type of measurement bias can seriously affect the estimation of the targeted construct, the validity of scale scores, and the application of psychometric models that assume invariance of item parameters across respondents and assessment periods. Empirical evidence suggests that extreme response "tendencies" or styles are relatively stable and consistent both over different scales and across time [11,12,13].
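The distortion described above can be illustrated with a toy simulation. The sketch below is not the paper's multidimensional item response theory model; the function names, category thresholds, and distortion mechanism are invented purely to show how two respondents with the same underlying trait level can produce different extreme-response rates, and hence different sum scores, once a response style operates on top of the content-driven responses.

```python
import numpy as np

rng = np.random.default_rng(0)

def categorize(latent, thresholds=(-1.5, -0.5, 0.5, 1.5)):
    """Map latent item propensities to 1..5 Likert categories via fixed thresholds."""
    return np.digitize(latent, thresholds) + 1

def apply_ers(cats, ers, rng):
    """Toy ERS mechanism: push near-extreme responses outward with probability `ers`."""
    push = rng.random(cats.shape) < ers
    cats = np.where(push & (cats >= 4), 5, cats)  # 4 -> 5
    cats = np.where(push & (cats <= 2), 1, cats)  # 2 -> 1
    return cats

def ers_index(responses, extreme=(1, 5)):
    """Simple descriptive ERS index: proportion of responses in the extreme categories."""
    return float(np.mean(np.isin(responses, extreme)))

# One respondent's content-driven responses on 12 items at a fixed trait level...
latent = 0.5 + rng.normal(0, 1, 12)
honest = categorize(latent)

# ...and the same responses distorted by a strong extreme response style.
styled = apply_ers(honest, 0.9, rng)

print("no ERS:    index =", ers_index(honest), " sum score =", honest.sum())
print("strong ERS: index =", ers_index(styled), " sum score =", styled.sum())
```

Because the distortion only moves category 4 to 5 and category 2 to 1, the trait-relevant content is unchanged while the ERS index and the sum score shift, which is exactly the kind of content-independent category selection the modeling approach is meant to separate from the targeted trait.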


