Abstract

Automated tools for syntactic complexity measurement are increasingly used to analyze various kinds of second language corpora, even though these tools were originally developed and tested on texts produced by advanced learners. This study investigates the reliability of automated complexity measurement for beginner and lower-intermediate L2 English data by comparing manual and automated analyses of a corpus of 80 texts written by Dutch-speaking learners. Our quantitative and qualitative analyses reveal that the reliability of automated complexity measurement is substantially affected by learner errors, parser errors, and Tregex pattern undergeneration. We also demonstrate the importance of aligning the definitions of analytical units between the computational tool and human annotators. To enhance the reliability of automated analyses, we recommend that certain modifications be made to the system and that non-advanced L2 English data be preprocessed prior to automated analysis.
