Abstract

The main objective of this Methods Showcase Article is to show how the technique of adaptive comparative judgment, coupled with a crowdsourcing approach, can offer practical solutions to reliability issues as well as to the time and cost difficulties associated with a text‐based approach to proficiency assessment in L2 research. We showcased this method by reporting on the methodological framework implemented in the Crowdsourcing Language Assessment Project and by presenting the results of a first study demonstrating that a crowd is able to assess learner texts with high reliability. We found no effect of language skills or language assessment experience on the assessment task, but judges who had received formal language assessment training seemed to differ in their decisions from judges who had not received such training. However, the scores generated by the crowdsourced task exhibited a strong positive correlation with the rubric‐based scores provided with the learner corpus used.
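The abstract names comparative judgment without spelling out how pairwise decisions become a proficiency scale. As an illustrative sketch only (not the project's actual pipeline, and omitting the adaptive pair-selection step of ACJ), the following Python snippet shows the standard way such judgments are typically aggregated: fitting a Bradley–Terry model to "text A reads as more proficient than text B" decisions. All names, the simulated data, and the number of texts are hypothetical.

```python
import numpy as np

def bradley_terry(n_items, comparisons, iters=500, tol=1e-10):
    """Estimate Bradley-Terry strengths from pairwise judgments.

    comparisons : iterable of (winner, loser) index pairs, e.g. crowd
    judgments of which of two learner texts reads as more proficient.
    Returns strengths normalised to sum to n_items (higher = better).
    Uses the standard MM update (Hunter, 2004).
    """
    wins = np.zeros(n_items)               # w_i: times text i was preferred
    counts = np.zeros((n_items, n_items))  # n_ij: times texts i and j met
    for winner, loser in comparisons:
        wins[winner] += 1
        counts[winner, loser] += 1
        counts[loser, winner] += 1

    p = np.ones(n_items)                   # initial strengths
    for _ in range(iters):
        # denom_i = sum_j n_ij / (p_i + p_j); the diagonal n_ii is zero
        denom = (counts / (p[:, None] + p[None, :])).sum(axis=1)
        new_p = wins / np.maximum(denom, 1e-12)
        new_p *= n_items / new_p.sum()      # fix the arbitrary scale
        if np.max(np.abs(new_p - p)) < tol:
            return new_p
        p = new_p
    return p

if __name__ == "__main__":
    # Hypothetical example: 5 learner texts, 2000 simulated crowd judgments
    rng = np.random.default_rng(0)
    true_quality = np.array([0.2, 0.8, 1.5, 2.3, 3.0])
    pairs = []
    for _ in range(2000):
        i, j = rng.choice(5, size=2, replace=False)
        p_i_wins = np.exp(true_quality[i]) / (
            np.exp(true_quality[i]) + np.exp(true_quality[j]))
        pairs.append((i, j) if rng.random() < p_i_wins else (j, i))
    print(bradley_terry(5, pairs))  # estimated scale recovers the ordering
```

In a full ACJ setup the next pair shown to a judge is chosen adaptively (e.g. pairing texts with similar current estimates) rather than at random as in this simulation, which is what keeps the number of judgments per text manageable.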
