Abstract

Task difficulty is an important but complex phenomenon in Applied Linguistics, for which there is relatively little empirical research. This article discusses approaches to defining task difficulty and focuses on objective task difficulty derived from ratings of performances and on difficulty derived from an error count in the performances, thus taking errors as indicators of writing task difficulty. Errors are described in terms of the Scope–Substance error taxonomy in writing performances from the Slovene General Matura examination in English. The most frequent errors are located at word and phrase level. Generally, error frequency decreases as writing proficiency increases, but some error types do not conform to this trend. This is the case for punctuation errors, which gain prominence at higher levels of mastery. The results of this study are relevant for assessment, particularly for rating scale development or revision, and for rater training. They are equally relevant for teaching, since knowing the sources of difficulty in tasks is a prerequisite for effective pedagogical action. More generally, if applied to performances based on a wider range of tasks, viewing errors as indicators of difficulty can lead to a better understanding of difficulty‐generating task features.
