Abstract

The Analytic Scale of Argumentative Writing (ASAW) was developed in response to the need for a genre-specific scale to assess English as a Second Language (ESL) university student writers' argumentative essays. The present study reports the findings of field-testing ASAW. For this purpose, argumentative essay samples (n = 110) were collected and remote-scored by experienced raters (n = 5) using ASAW. Overall inter-rater reliability was moderate to high (r = 0.70-0.90). Intra-rater reliability was high (r = 0.84-0.92) after a short (6-week) rating interval and moderate to high (r = 0.70-0.77) after a long (9-week) interval. To test the scale's concurrent validity, the same essays were also scored with several established instruments; the scores assigned using ASAW showed moderate (r = 0.51) to high (r = 0.77) correlations with the scores awarded using these instruments. The raters who used ASAW also completed a questionnaire evaluating the scale itself, and on average they reported high satisfaction with it. Raters took an average of 5.5 minutes to score an essay, indicating that the scale is economical to use. The findings have useful implications for the refinement of ASAW and for the future development and validation of similar scales and benchmarks.
