Abstract

Effectively evaluating the credibility and accuracy of multiple sources is critical for college readiness. We developed 24 source evaluation tasks spanning four predicted difficulty levels of a hypothesized learning progression (LP) and piloted these tasks to evaluate the utility of an LP-based approach to designing formative literacy assessments. Sixth-, seventh-, and eighth-grade students (N = 360, 120 per grade) completed 12 of the 24 tasks in an online testing session. Analyses examined the tasks’ reliability and validity and whether patterns of performance aligned with predicted LP levels (i.e., recovery of the LP), using task progression maps derived from item response theory (IRT). Results suggested that the LP tasks were reliable and correlated with external measures; however, some lower-level tasks proved unexpectedly difficult. Possible explanations for the low performance are discussed, followed by implications for future LP and task revisions. This work provides a model for designing and evaluating LP-based literacy assessments.
