Abstract

What is involved in source use and how to assess it have been key concerns of research on L2 integrated writing assessment. However, raters’ ability to assess the construct reliably remains scarcely investigated, as do the relations among different types of integrated writing tasks. To partially address this gap, the present study had a sizeable sample (N = 204) of undergraduates from three Hong Kong universities complete a summary task and an integrated reading-to-write argumentative essay task under test-like conditions. Then, focusing on the criteria of source use, it analysed raters’ application of analytical rubrics in assessing the writing outputs. Rater variability and scale structures were examined through Multi-Facet Rasch Measurement and compared across the two writing tasks. Both similarities and differences were found. In the summary task, the criteria for source use were applied similarly to the criteria for language use and discourse features. In the essay task, however, the application of the source use criteria was much less consistent, and diagnostic statistics indicated that a scale with fewer levels would be advisable. For both tasks, the criterion of source language use was found neither to fit the overall model nor to align with the criteria for source ideas or language use, suggesting that this criterion may represent a trait distinct from the others. The statistical relations between source use and the other subconstructs of integrated writing tasks are also reported. Implications for refining the assessment of the source use construct are discussed.
