Abstract

We conducted generalizability studies to examine the extent to which ratings of language arts performance assignments, administered in a large, diverse, urban district to students in second through ninth grades, result in reliable and precise estimates of true student performance. The results highlight three important points when considering the use of performance assessments in large-scale settings: (a) Rater training may significantly impact reliability; (b) simple rater agreement indices do not provide enough information to assess the reliability of inferences about true student achievement; and (c) assessments adequate for relative judgments of student performance do not necessarily provide sufficient precision for absolute criterion-referenced decisions.
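
As a hedged illustration of point (c), and not drawn from the study itself, the sketch below contrasts the two standard generalizability-theory coefficients for a simple persons-by-raters design: the relative coefficient (E-rho-squared), which supports rank-ordering students, and the absolute dependability index (Phi), which supports criterion-referenced decisions. The design, the function name g_coefficients, and the variance components are hypothetical; the point is only that absolute error adds the rater main effect to the error term, so Phi can fall short even when the relative coefficient looks adequate.

# Hypothetical sketch (not from the paper): relative vs. absolute reliability
# for a persons x raters (p x r) generalizability design.

def g_coefficients(var_p, var_r, var_pr, n_raters):
    """Return (relative E-rho^2, absolute Phi) from variance components."""
    rel_error = var_pr / n_raters                     # relative error: interaction/residual only
    abs_error = var_r / n_raters + var_pr / n_raters  # absolute error adds the rater main effect
    e_rho2 = var_p / (var_p + rel_error)
    phi = var_p / (var_p + abs_error)
    return e_rho2, phi

# Hypothetical variance components: persons, raters, person-by-rater residual.
e_rho2, phi = g_coefficients(var_p=0.50, var_r=0.20, var_pr=0.30, n_raters=2)
print(f"E-rho^2 (relative) = {e_rho2:.2f}")  # ~0.77: may suffice for relative comparisons
print(f"Phi (absolute)     = {phi:.2f}")     # ~0.67: weaker for criterion-referenced cuts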
