Abstract

This experimental study explores how source use features impact raters’ judgment of argumentation in a second language (L2) integrated writing test. One hundred four experienced and novice raters were recruited to complete a rating task that simulated the scoring assignment of a local English Placement Test (EPT). Sixty written responses were adapted from essays written by EPT test-takers and were crafted to reflect different conditions of source use features, namely source use quantity and quality. Rater scores were analyzed using the many-facet Rasch model and mixed two-way analyses of variance (ANOVAs) to examine how they were affected by source use features and rater experience. Results show that source use features impacted the argumentation scores assigned by raters. Paragraphs that incorporated more source-text ideas more effectively received the highest argumentation scores, whereas those with limited, poorly integrated source information received the lowest. Rater experience affected the scores assigned but did not meaningfully influence rater performance. The findings of this study connect specific source use features with raters’ evaluation of argumentation, helping to further disentangle the relationships among examinee performance, rater decision, and task features of integrated argumentative writing tests. They also offer meaningful implications for writing assessment research and practice.
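
To illustrate the second analysis named above, the following is a minimal sketch of a mixed two-way ANOVA on rater scores, written in Python with the pingouin package. All data, factor names, and effect sizes here are simulated for illustration and are not taken from the study; the many-facet Rasch analysis typically requires dedicated software (e.g., FACETS) and is not shown.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)

# Simulated long-format rating data (illustrative only, not the study's data):
# 104 raters, each scoring responses under two hypothetical source-use conditions.
rows = []
for i in range(104):
    rater = f"R{i:03d}"
    experience = "experienced" if i < 52 else "novice"  # between-subjects factor
    for condition in ("limited_source_use", "rich_source_use"):  # within-subjects factor
        mean = 3.0 + (0.8 if condition == "rich_source_use" else 0.0)
        rows.append({
            "rater": rater,
            "experience": experience,
            "condition": condition,
            "score": rng.normal(mean, 0.5),  # simulated argumentation score
        })
df = pd.DataFrame(rows)

# Mixed two-way ANOVA: source-use condition is within-subjects,
# rater experience is between-subjects.
aov = pg.mixed_anova(data=df, dv="score", within="condition",
                     subject="rater", between="experience")
print(aov.round(3))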
