Abstract
Background: Although score reliability is a sample-dependent characteristic, researchers often report only reliability estimates from previous studies as justification for employing particular questionnaires in their research. The present study followed reliability generalization procedures to determine the mean score reliability of the Eating Disorder Inventory (EDI) and its most commonly employed subscales (Drive for Thinness, Bulimia, and Body Dissatisfaction) and the Eating Attitudes Test (EAT) as a way to better identify those characteristics that might impact score reliability.
Methods: Published studies that used these measures were coded based on their reporting of reliability information and additional study characteristics that might influence score reliability.
Results: Score reliability estimates were included in 26.15% of studies using the EDI and 36.28% of studies using the EAT. Mean Cronbach’s alphas for the EDI (total score = .91; subscales = .75 to .89), EAT-40 (total score = .81), and EAT-26 (total score = .86; subscales = .56 to .80) suggested variability in estimated internal consistency. Whereas some EDI subscales exhibited higher score reliability in clinical eating disorder samples than in nonclinical samples, other subscales did not exhibit these differences. Score reliability information for the EAT was primarily reported for nonclinical samples, making it difficult to characterize the effect of type of sample on these measures. However, there was a tendency for mean score reliability to be higher in adult (vs. adolescent) samples and in female (vs. male) samples.
Conclusions: Overall, this study highlights the importance of assessing and reporting internal consistency at every test administration, because reliability is affected by characteristics of the participants being examined.
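The internal consistency values reported above are Cronbach's alpha coefficients. For readers who want to compute the same statistic for their own sample rather than citing a prior estimate, a minimal sketch follows; the function name cronbach_alpha and the simulated item responses are illustrative assumptions, not study data or the authors' code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items on the (sub)scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative example: simulated responses to a 7-item scale for 200 respondents
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
responses = true_score + rng.normal(scale=0.8, size=(200, 7))
print(round(cronbach_alpha(responses), 2))
```

Because alpha depends on the item variances and covariances observed in the data at hand, re-estimating it for each administration, as the authors recommend, is straightforward once the item-level responses are retained.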
Highlights
Score reliability is a sample-dependent characteristic, yet researchers often report only reliability estimates from previous studies as justification for employing particular questionnaires in their research
Several measures are available for the assessment of eating disorder (ED) symptomatology, but researchers or clinicians may falsely assume that these tools retain adequate psychometric properties, such as internal consistency, across all circumstances [6]
The researchers reported reliability information for either the total scale score or one or more subscale scores for their sample in 74 (26.15%) studies; 10 of these studies were excluded from the analyses because the authors only reported a range of reliability coefficients for the subscale scores, and 9 studies were excluded for using a different measurement structure
Summary
Score reliability is a sample-dependent characteristic, yet researchers often report only reliability estimates from previous studies as justification for employing particular questionnaires in their research. The present study followed reliability generalization procedures to determine the mean score reliability of the Eating Disorder Inventory and its most commonly employed subscales (Drive for Thinness, Bulimia, and Body Dissatisfaction) and the Eating Attitudes Test as a way to better identify those characteristics that might impact score reliability. Measurement error causes observed effects to fluctuate across studies and may lead to underestimation of true effects [10]. This has led to recommendations for correcting effect size estimates for unreliable scores [15]. Because score variability is a property of the data, reliability estimates will not remain constant across studies and should be evaluated and reported as part of the process of describing the data.
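The correction referenced in [15] is presumably the classical correction for attenuation, which divides an observed correlation by the square root of the product of the two measures' score reliabilities. The sketch below assumes that standard formula; the function name disattenuate and the example values are illustrative, not taken from the study.

```python
import math

def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for unreliability in both measures
    (classical correction for attenuation: r_xy / sqrt(rel_x * rel_y))."""
    return r_xy / math.sqrt(rel_x * rel_y)

# e.g., an observed r of .40 with score reliabilities of .75 and .86
print(round(disattenuate(0.40, 0.75, 0.86), 2))  # ~0.50
```

The correction is only as good as the reliability estimates plugged into it, which is one reason the authors argue for reporting sample-specific reliability rather than reusing values from earlier studies.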