Abstract
Paraphrasing is a language activity used to train students' comprehensive language use. Although its positive role has been widely recognized, research on the construct and rating scales for paraphrase tasks remains scarce. This study developed a five-level analytic rating scale and applied it to score 143 examinees' paraphrase responses. To validate the scale using an argument-based approach, the scores were analyzed with generalizability analysis and many-facets Rasch analysis, and the responses were coded for construct-related linguistic features. The results show that: (1) the analytic scores explained 35.5% of the variance, and the relative G coefficient (Coef_G) met the confidence requirements for a rating-scale validation study; (2) the task distinguished examinees' performance into different levels, rater severity was consistent, and the rating dimensions were relatively independent, with significant differences among them; (3) the construct components are reflected in the examinees' responses; and (4) the rating scale is appropriate for score reporting and decision making and has a positive effect on teaching and learning. The study will help language teachers assess students' performance and provide effective feedback on paraphrase tasks.