Abstract

No procedure exists for computing statistical power and sample size requirements for content validity analyses, which constitute the first step in the psychometric validation of clinical outcome assessments (COAs). Power computations for content validity are important for optimizing domain specification and scoring, which ultimately help produce treatment-responsive scores. In this work, a new power procedure is developed and validated that accounts for the number of items, the number of response categories per item, the complexity of the conceptual framework, and a pre-specifiable effect size. Power is derived for model fit statistics commonly used in content validity analyses. This presentation will describe the theoretical power procedure, review results of a simulation study validating it, and discuss how manipulating items and response categories allows power to be controlled. The validity of the power procedure was assessed in a simulation study designed to test the agreement between theoretical and empirical power. Factors manipulated in the simulation included the number of items, the effect size, and the sample size. For each condition, the power to reject a unidimensional model fitted to data generated from a multidimensional structure was assessed. Choosing between such models is a common task in COA development, as these models help ascertain whether a unidimensional score or domain scores are required. The correlation between theoretical and empirical power was at least 0.998 in all simulation conditions, validating the power procedure. A method for determining power and sample size in COA validation studies has thus been developed; these power computations will help align study sample size with the statistical demands of planned content validity analyses.
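
The abstract does not spell out the computations, but for the chi-square fit statistics it mentions, theoretical power is classically obtained from the noncentral chi-square distribution (the Satorra-Saris approach): the population misfit of the unidimensional model to the multidimensional structure defines a noncentrality parameter that grows linearly with sample size. The Python sketch below illustrates only that generic approach, not the paper's own procedure; the `misfit_per_obs` effect size and the 1-df example are hypothetical placeholders.

```python
# Minimal sketch: theoretical power for a chi-square model fit test via
# the noncentral chi-square distribution. All numeric values below are
# hypothetical illustrations, not values from the study.
from scipy.stats import chi2, ncx2

def fit_test_power(misfit_per_obs: float, df: int, n: int,
                   alpha: float = 0.05) -> float:
    """Power to reject a misspecified (e.g. unidimensional) model.

    misfit_per_obs: population misfit per observation (the effect size);
        the noncentrality parameter is lambda = n * misfit_per_obs.
    df: degrees of freedom of the fit test, e.g. the df difference
        between the unidimensional and multidimensional models.
    """
    crit = chi2.ppf(1.0 - alpha, df)              # rejection threshold under H0
    return ncx2.sf(crit, df, n * misfit_per_obs)  # P(reject) under H1

def required_n(misfit_per_obs: float, df: int, target: float = 0.80,
               alpha: float = 0.05) -> int:
    """Smallest sample size reaching the target power."""
    for n in range(10, 1_000_000):
        if fit_test_power(misfit_per_obs, df, n, alpha) >= target:
            return n
    raise ValueError("target power not reached within search bound")

# Hypothetical example: per-observation misfit of 0.02 and a 1-df test
# (unidimensional model vs. two correlated domains).
print(fit_test_power(0.02, df=1, n=400))  # ~0.81
print(required_n(0.02, df=1))             # ~393
```

In this framing, the abstract's point about manipulating items and response categories is natural: design choices that increase the per-observation misfit of the wrong model raise power at a fixed sample size, so power can be controlled through instrument design as well as through n.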
