Abstract

The study reported in this paper starts from the hypothesis that the errors observable in writing performances can account for much of the variability in the ratings awarded to them, even when the prescribed rating criteria explicitly direct raters' attention towards successfully performed aspects of a performance rather than towards errors. The hypothesis is tested on a sample of texts rated independently of the study on a five-point analytic rating scale built around ‘Can do’-style descriptors. The relationship between errors and ratings is estimated using ordinal logistic regression, which yields an overall pseudo-R² of 0.51. With roughly half of the score variability thus explainable by error occurrences, the hypothesis is considered confirmed. The paper concludes by discussing the implications of the findings and their potential application to the assessment of writing beyond the local assessment context.
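To illustrate the kind of analysis the abstract describes, the following is a minimal sketch of an ordinal logistic regression of five-point ratings on error counts. The data, column names, and the choice of McFadden's pseudo-R² are all assumptions for illustration; the abstract does not specify which pseudo-R² variant the study reports, nor how its error predictors were operationalised.

```python
# Hypothetical sketch: ordinal logistic regression of ratings on error
# counts, with McFadden's pseudo-R^2 computed against a null model.
# All data below are simulated; this is not the study's actual dataset.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 200
errors = rng.poisson(6, size=n)  # total errors observed per text (simulated)
rating = np.clip(5 - errors // 3 + rng.integers(-1, 2, size=n), 1, 5)
df = pd.DataFrame({"errors": errors, "rating": rating})

# Fit a proportional-odds (ordinal logistic) model of rating on errors.
model = OrderedModel(df["rating"], df[["errors"]], distr="logit")
res = model.fit(method="bfgs", disp=False)

# McFadden's pseudo-R^2 = 1 - llf / llnull, where the null (intercepts-only)
# model's log-likelihood follows from the empirical category proportions.
counts = df["rating"].value_counts().to_numpy()
llnull = np.sum(counts * np.log(counts / n))
pseudo_r2 = 1 - res.llf / llnull
print(f"McFadden pseudo-R2: {pseudo_r2:.2f}")
```

On this reading, a pseudo-R² of 0.51 means the error-based model roughly halves the null model's lack of fit, which is what the abstract summarises as about 50% of score variability being explainable by errors.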
