Abstract

An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability, which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced-choice, time-limited lexical decision task (LDT), self-delivered through the web-browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 76 trials (2–3 min) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.
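
The disattenuated correlation quoted above follows Spearman's classic correction for attenuation, which divides the observed correlation by the geometric mean of the two measures' reliabilities. A minimal Python sketch of that arithmetic is below; it uses the LDT reliability of 0.97 reported in the abstract, while the Woodcock-Johnson reliability of 0.97 is an assumed, illustrative value not stated in this excerpt:

```python
from math import sqrt

def disattenuate(r_xy: float, r_xx: float, r_yy: float) -> float:
    """Spearman's correction for attenuation: the correlation two
    measures would show if both were perfectly reliable."""
    return r_xy / sqrt(r_xx * r_yy)

r_observed = 0.91  # LDT vs. Woodcock-Johnson correlation (abstract)
r_ldt = 0.97       # LDT reliability (abstract)
r_wj = 0.97        # assumed Woodcock-Johnson reliability (illustrative)

print(round(disattenuate(r_observed, r_ldt, r_wj), 2))  # -> 0.94
```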

Highlights

  • An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants

  • Our primary goal was to evaluate the suitability of a browser-based lexical decision task as a measure of reading ability

  • Lexical decision is commonly used to interrogate the mechanisms of word recognition, and previous studies have shown: (a) differences in task performance in dyslexia, (b) changes in task performance over development and (c) relationships to various measures of reading ability (an illustrative scoring sketch follows this list)
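
Performance on a two-alternative forced-choice lexical decision task is often summarized with a signal-detection statistic. The sketch below shows one conventional way to score a session with d′; it is illustrative only, since the paper's own IRT-based scoring is not detailed in this excerpt, and the trial counts (38 words and 38 pseudowords, matching the 76-trial short form) and response counts are hypothetical:

```python
from statistics import NormalDist

def ldt_dprime(hits: int, n_words: int,
               false_alarms: int, n_pseudo: int) -> float:
    """Signal-detection sensitivity (d') for a lexical decision task.
    Hits are 'word' responses to real words; false alarms are 'word'
    responses to pseudowords. Rates are nudged away from 0 and 1 so
    the inverse-normal transform stays finite."""
    hit_rate = min(max(hits / n_words, 0.5 / n_words), 1 - 0.5 / n_words)
    fa_rate = min(max(false_alarms / n_pseudo, 0.5 / n_pseudo),
                  1 - 0.5 / n_pseudo)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical session: 35 of 38 words correctly endorsed,
# 4 of 38 pseudowords incorrectly endorsed.
print(round(ldt_dprime(35, 38, 4, 38), 2))  # ~2.66
```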


Introduction

An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. Standardized measures of reading ability typically require a trained administrator to deliver the test to each research participant individually. These constraints: (a) make it time consuming and costly to recruit and test large samples, (b) prohibit including research participants who live more than a short drive from a university, (c) bias samples towards university communities and (d) create major barriers to the inclusion of under-represented groups. Even though a researcher might be able to accurately measure processing speed or visual motion perception in thousands of subjects through the web-browser, they would still need to individually administer standardized reading assessments to each participant. Tests like the Woodcock-Johnson are not yet amenable to automated administration through the web-browser because speech recognition algorithms are still imperfect, particularly for children pronouncing decontextualized words and pseudowords.
