Abstract

Although researchers have argued for a mixed-methods approach to rubric design and validation, such research is sparse in the area of L2 integrated writing. This article reports on the validation of an analytic rubric for assessing a classroom-based integrated writing test. Argumentative integrated essays (N = 48) written by EAP students at an English-medium Canadian university were rated by instructors (N = 10) with prior EAP teaching experience. Employing a mixed-methods design, the quality of the rubric was established through many-facet Rasch measurement and instructor perceptions elicited during semi-structured interviews. To further explore the rubric’s ability to differentiate among students, essays from three performance levels (low, average, high) were compared on measures of fluency, syntactic and lexical complexity, cohesion, and lexical diversity. Results suggest the rubric can capture variation in student performance. Implications are discussed in terms of the validation of assessment rubrics in localized assessment contexts.
