Abstract

Recent studies of proficiency measurement and reporting practices in applied linguistics have revealed widespread use of unsatisfactory practices, such as reliance on proxy measures of proficiency in place of explicit tests. Learner corpus research is one area particularly affected by this problem: few learner corpora contain reliable, valid evaluations of text proficiency. This has led to calls for the development of new L2 writing proficiency measures for use in research contexts. Answering this call, a recent study by Paquot et al. (2022) generated assessments of learner corpus texts using a community-driven approach in which judges recruited from the linguistics community conducted assessments using comparative judgement. Although the approach generated reliable assessments, its practical use is limited because linguists are not always available to contribute to data collection. This paper therefore explores an alternative approach in which judges are recruited through a crowdsourcing platform. We find that assessments generated in this way can reach near-identical levels of reliability and concurrent validity to those produced by members of the linguistics community.
