Abstract

This chapter illustrates how the question of task equivalence in writing assessment can be examined empirically by combining score analysis and text analysis approaches. A data set of essays written by EFL (English as a Foreign Language) university students on two argumentative tasks, assumed to be equivalent in difficulty, is used to show how this combined approach can help test developers identify the direct and indirect effects of tasks on writing test performance. The students' written responses to the two tasks were coded for various linguistic and discourse features based on the writing competence model (Connor & Mbaye, 2002), and their scores were analyzed with a measurement model, multi-faceted Rasch measurement (Linacre, 2012). This approach can inform future studies of task equivalence and of variability in examinee and rater performance in writing assessment settings.

Keywords: English as a Foreign Language (EFL) university students; examinee performance; multi-faceted Rasch measurement; rater performance; score analysis approach; text analysis approach; writing assessment; writing competence model; writing task equivalence
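To make the measurement model concrete, the sketch below shows a minimal joint maximum likelihood estimation of a three-facet Rasch model (examinee ability, rater severity, task difficulty) in Python. It is illustrative only: it assumes dichotomous scores for simplicity, whereas the study analyzes rating-scale data with dedicated software (FACETS; Linacre, 2012), and all names and data here are hypothetical, not from the chapter.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def jmle_mfrm(obs, n_persons, n_raters, n_tasks, n_iter=500, lr=0.1):
    """Joint maximum likelihood estimation for a dichotomous
    three-facet Rasch model:
        P(x = 1) = sigmoid(ability[p] - severity[r] - difficulty[t])
    obs: list of (person, rater, task, score) tuples, score in {0, 1}.
    """
    ability = np.zeros(n_persons)
    severity = np.zeros(n_raters)
    difficulty = np.zeros(n_tasks)
    p_idx = np.array([o[0] for o in obs])
    r_idx = np.array([o[1] for o in obs])
    t_idx = np.array([o[2] for o in obs])
    x = np.array([o[3] for o in obs], dtype=float)
    for _ in range(n_iter):
        eta = ability[p_idx] - severity[r_idx] - difficulty[t_idx]
        resid = x - sigmoid(eta)  # observed minus expected score
        # Gradient ascent on the joint log-likelihood, one facet at a time.
        ability += lr * np.bincount(p_idx, weights=resid, minlength=n_persons)
        severity -= lr * np.bincount(r_idx, weights=resid, minlength=n_raters)
        difficulty -= lr * np.bincount(t_idx, weights=resid, minlength=n_tasks)
        # Identify the model: centre the rater and task facets at zero.
        severity -= severity.mean()
        difficulty -= difficulty.mean()
    return ability, severity, difficulty

# Illustrative data: 3 examinees, 2 raters, 2 tasks.
obs = [(0, 0, 0, 1), (0, 1, 1, 1), (1, 0, 0, 0),
       (1, 1, 1, 1), (2, 0, 1, 0), (2, 1, 0, 0)]
ability, severity, difficulty = jmle_mfrm(obs, 3, 2, 2)
```

Under this framing, the task-equivalence question in the abstract corresponds to comparing the two entries of `difficulty`: tasks that are equivalent in difficulty should yield estimates that differ by no more than their standard errors.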
