Abstract

Ensuring internal validity in quantitative research requires, among other conditions, reliable instrumentation. Unfortunately, second language (L2) researchers often fail to report, and even more often fail to interpret, reliability estimates beyond generic benchmarks for acceptability. As a means to guide interpretations of such estimates, this article meta‐analyzes reliability coefficients (internal consistency, interrater, and intrarater) as reported in published L2 research. We recorded 2,244 reliability estimates in 537 individual articles along with study features (e.g., sample size) and instrument features (e.g., item formats) proposed to influence reliability. We also coded for the indices employed (e.g., alpha, KR20). The coefficients were then aggregated (i.e., meta‐analyzed). The three types of reliability varied, with internal consistency as the lowest: median = .82. Interrater and intrarater estimates were substantially higher (.92 and .95, respectively). Overall estimates were also found to vary according to study and instrument features such as proficiency (low = .79, intermediate = .84, advanced = .89) and target skill (e.g., writing = .88 vs. listening = .77). We use our results to inform and encourage interpretations of reliability estimates relative to the larger field as well as to the substantive and methodological features particular to individual studies and subdomains.
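As background for readers less familiar with the internal‐consistency indices named above (these are the conventional textbook definitions, not results or formulas taken from the meta‐analysis itself), Cronbach's alpha and KR‐20 are typically defined as

\[
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right),
\qquad
\mathrm{KR\text{-}20} \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^{2}}\right),
\]

where k is the number of items, \(\sigma_i^{2}\) is the variance of item i, \(\sigma_X^{2}\) is the variance of total scores, and \(p_i\) (with \(q_i = 1 - p_i\)) is the proportion of respondents answering item i correctly. KR‐20 is the special case of alpha for dichotomously scored items.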
