Abstract
Selecting appropriate texts for L2 (second/foreign language) learners is an important approach to enhancing motivation and, by extension, learning. There is currently no tool for classifying foreign language texts according to a language proficiency framework, which makes it difficult for students and educators to determine the precise difficulty/complexity level of an unclassified text. Taking the Chinese language as an example, this study aimed to create a readability assessment system, called the Chinese Readability Index Explorer for Chinese as a Foreign Language (CRIE–CFL), to level texts intended for instructional use, that is, to sort them by proficiency level. The framework of choice in this project was the Common European Framework of Reference (CEFR). A team of expert CFL teachers first classified 1,578 CFL texts into their appropriate CEFR levels. A set of 30 CFL readability features was then developed or drawn from previous research and ranked by importance using F-scores. A support vector machine (SVM) model was then trained by sequentially integrating the features into the model to optimize classification accuracy. The empirical evaluation of CRIE–CFL revealed average exact- and adjacent-level accuracies of 74.97% and 99.62%, respectively, for predicting the expert classification of a text. The functionalities of CRIE–CFL are introduced and discussed.
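The following is a minimal sketch of the kind of pipeline the abstract describes: ranking readability features by F-score and then adding them one at a time to an SVM classifier to find the subset that maximizes accuracy. The file names, feature matrix, hyperparameters, and cross-validation setup are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch: F-score feature ranking + incremental SVM training,
# loosely following the procedure summarized in the abstract.
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_texts, 30) matrix of readability features; y: expert-assigned CEFR levels.
# These files are hypothetical placeholders for the CRIE-CFL corpus.
X = np.load("crie_cfl_features.npy")
y = np.load("crie_cfl_cefr_labels.npy")

# 1. Rank the 30 features by F-score (ANOVA F-value across CEFR levels).
f_scores, _ = f_classif(X, y)
ranked = np.argsort(f_scores)[::-1]  # most discriminative features first

# 2. Add features in ranked order, keeping the subset size that maximizes
#    cross-validated exact-level accuracy of the SVM classifier.
best_acc, best_k = 0.0, 0
for k in range(1, X.shape[1] + 1):
    subset = ranked[:k]
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    acc = cross_val_score(model, X[:, subset], y, cv=5).mean()
    if acc > best_acc:
        best_acc, best_k = acc, k

print(f"Best exact-level accuracy {best_acc:.4f} using the top {best_k} features")
```

In this sketch, adjacent-level accuracy (counting a prediction one CEFR level away as acceptable) would require a custom scoring function rather than the default accuracy used above.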