Aligning linguistic complexity with the difficulty of English texts for L2 learners based on CEFR levels

  • Abstract
  • References
  • Similar Papers
Abstract

Selecting appropriate texts for second language (L2) learners is essential for effective education. However, current text difficulty models often inadequately classify materials for L2 learners by proficiency levels. This study addresses this deficiency by employing the Common European Framework of Reference for Languages (CEFR) as its foundational framework. A cohort of expert English-L2 educators classified 1,181 texts from the CommonLit Ease of Readability corpus into CEFR levels. A random forest model was then trained using 24 linguistic complexity features to predict the CEFR levels of English texts for L2 learners. The model achieved 62.6% exact-level accuracy across the six granular CEFR levels and 82.6% across the three overarching levels, outperforming a baseline model based on three existing readability formulas. Additionally, it identified shared and unique linguistic features across different CEFR levels, highlighting the necessity to adjust text classification models to accommodate the distinct linguistic profiles of low- and high-proficiency readers.
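The classification pipeline described in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' code: the two toy features and the coarse A/B/C labels are hypothetical stand-ins for the study's 24 linguistic complexity features and six CEFR levels.

```python
# Illustrative sketch (not the authors' code): a random forest trained on
# numeric linguistic-complexity features to predict CEFR labels.
from sklearn.ensemble import RandomForestClassifier

# Toy feature matrix: each row is a text, each column a hypothetical
# complexity feature (e.g., mean sentence length, lexical diversity).
X_train = [
    [8.2, 0.41], [9.1, 0.44],    # easy texts
    [14.7, 0.58], [15.2, 0.61],  # mid-level texts
    [22.3, 0.75], [23.0, 0.78],  # hard texts
]
y_train = ["A", "B", "B", "B", "C", "C"][:6]
y_train = ["A", "A", "B", "B", "C", "C"]  # coarse CEFR bands

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Predict the band of two unseen texts from their feature vectors.
preds = clf.predict([[8.5, 0.42], [22.8, 0.76]])
print(list(preds))
```

In the actual study the feature vectors would come from automated text-analysis tools and the labels from the expert CEFR annotations.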

References (showing 10 of 66 papers)
  • Xiaofei Lu (2012). The Relationship of Lexical Richness to the Quality of ESL Learners’ Oral Narratives. The Modern Language Journal. doi:10.1111/j.1540-4781.2011.01232_1.x. Cited 397 times.
  • Brett J. Hashimoto et al. (2019). More Than Frequency? Exploring Predictors of Word Difficulty for Second Language Learners. Language Learning. doi:10.1111/lang.12353. Cited 67 times.
  • Nasir Jalal et al. (2022). A novel improved random forest for text classification using feature ranking and optimal number of trees. Journal of King Saud University - Computer and Information Sciences. doi:10.1016/j.jksuci.2022.03.012. Open access. Cited 43 times.
  • Sarah E. Petersen et al. (2008). A machine learning approach to reading level assessment. Computer Speech & Language. doi:10.1016/j.csl.2008.04.003. Cited 174 times.
  • Gerard Westhoff (2007). Challenges and Opportunities of the CEFR for Reimagining Foreign Language Pedagogy. The Modern Language Journal. doi:10.1111/j.1540-4781.2007.00627_9.x. Open access. Cited 27 times.
  • Averil Coxhead (2000). A New Academic Word List. TESOL Quarterly. doi:10.2307/3587951. Cited 2,406 times.
  • Haitao Liu et al. (2017). Dependency distance: A new perspective on syntactic patterns in natural languages. Physics of Life Reviews. doi:10.1016/j.plrev.2017.03.002. Cited 219 times.
  • Danielle S. McNamara et al. (2014). Automated Evaluation of Text and Discourse with Coh-Metrix. doi:10.1017/cbo9780511894664. Cited 573 times.
  • Matthias Schonlau et al. (2020). The random forest algorithm for statistical learning. The Stata Journal. doi:10.1177/1536867x20909688. Open access. Cited 811 times.
  • Tom Salsbury et al. (2011). Psycholinguistic word information in second language oral discourse. Second Language Research. doi:10.1177/0267658310395851. Cited 78 times.

Similar Papers
  • Research Article
  • Cited 1 time
  • 10.1093/applin/amad054
Linguistic Features Distinguishing Students’ Writing Ability Aligned with CEFR Levels
  • Sep 21, 2023
  • Applied Linguistics
  • Hong Ma + 2 more

A substantive body of research has revolved around the linguistic features that distinguish different levels of students’ writing samples (e.g. Crossley and McNamara 2012; McNamara et al. 2015; Lu 2017). Nevertheless, it is somewhat difficult to generalize the findings across various empirical studies, given that different criteria were adopted to measure language learners’ proficiency levels (Chen and Baker 2016). Some researchers suggested using the Common European Framework of Reference for Languages (CEFR) (Council of Europe 2001) as the common standard for evaluating and describing students’ proficiency levels. Therefore, the current research intends to identify the linguistic features that distinguish students’ writing samples across CEFR levels by adopting a machine-learning method, the decision tree, which provides direct visualization of the decisions made at each step of the classification procedure. The linguistic features that emerged as predictive of CEFR levels could be employed to (i) inform L2 writing instruction, (ii) track long-term development of writing ability, and (iii) facilitate experts’ judgment in the practice of aligning writing tests/samples with CEFR.
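The decision-tree approach described above can be sketched as follows; the two features and CEFR labels here are hypothetical, and `export_text` is used to show the kind of step-by-step decision visualization the study relies on.

```python
# Sketch of a decision-tree classifier over toy writing features
# (hypothetical values, not the study's data), with its decision
# rules printed as text for direct inspection.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[5.0, 0.30], [6.0, 0.32], [12.0, 0.55], [13.0, 0.57]]
y = ["A2", "A2", "B2", "B2"]  # hypothetical CEFR labels

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# Print the learned rules, one threshold decision per line.
print(export_text(tree, feature_names=["mean_clause_len", "lexical_density"]))
```

The printed rule tree is what makes this method attractive for pedagogy: each split shows which feature, at which threshold, separates adjacent proficiency levels.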

  • Research Article
  • Cited 2 times
  • 10.3390/languages9070239
Mastery of Listening and Reading Vocabulary Levels in Relation to CEFR: Insights into Student Admissions and English as a Medium of Instruction
  • Jul 2, 2024
  • Languages
  • Zhiqing Li + 3 more

Prior to enrolling in an English as a medium of instruction (EMI) institution, students must show an English proficiency level through meeting a benchmark on a standard English proficiency test, which is typically aligned with the Common European Framework of Reference for Languages (CEFR). Along with overall English proficiency, aural/written vocabulary level mastery could also predict students’ success at EMI institutions, as students need adequate English vocabulary knowledge to comprehend lectures and course readings. However, aural/written vocabulary level mastery has yet to be clearly benchmarked to CEFR levels. Therefore, this study aimed to investigate the correlations between students’ aural/written vocabulary level mastery and their CEFR levels. Forty undergraduate students in a Macau EMI university were recruited to take one English proficiency test and two vocabulary level tests (i.e., Listening Vocabulary Levels Test (LVLT) and the Updated Vocabulary Levels Test (UVLT)). Correlation analyses were conducted to explore the relationship between students’ CEFR levels and their mastery of listening and reading vocabulary levels. A positive correlation was found between students’ CEFR levels and their mastery of receptive aural vocabulary levels (ρ = 0.409, p = 0.009). Furthermore, a statistically significant positive correlation was found between students’ CEFR levels and their mastery of receptive written vocabulary levels (ρ = 0.559, p < 0.001). Although positive correlations were observed, no clear pattern was identified regarding the relationship between students’ CEFR levels and their mastery of aural/written vocabulary levels. Regression analyses were further conducted to determine the extent to which the combination of receptive aural and written vocabulary knowledge predicts the CEFR levels. The results indicated that the regression model that included only UVLT scores better predicted the CEFR levels. 
Given the positive correlations observed between students’ CEFR levels and their mastery of vocabulary levels, this study’s findings suggest the inclusion of aural/written vocabulary levels as additional indicators for ensuring student academic success in EMI institutions. Implications for EMI universities on student admissions, classroom teaching, and provision of additional English courses were provided.
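The correlation analysis reported above can be illustrated with Spearman's rho, coding CEFR levels ordinally (e.g., A2 = 2, B1 = 3, B2 = 4, C1 = 5); the scores below are invented for illustration, not the study's data.

```python
# Illustrative Spearman correlation between hypothetical vocabulary-test
# scores and ordinally coded CEFR levels (assumed coding, not the study's).
from scipy.stats import spearmanr

vocab_scores = [55, 62, 70, 74, 81, 90, 93, 97]
cefr_ordinal = [2, 2, 3, 3, 3, 4, 4, 5]

rho, p = spearmanr(vocab_scores, cefr_ordinal)
print(round(rho, 3), round(p, 4))
```

Spearman's rho is the appropriate choice here because CEFR levels are ordinal, not interval-scaled.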

  • Research Article
  • Cited 31 times
  • 10.1075/eurosla.14.01gyl
Linguistic correlates to communicative proficiency levels of the CEFR
  • Aug 5, 2014
  • EUROSLA Yearbook
  • Henrik Gyllstad + 3 more

This study is a contribution to the empirical underpinning of the Common European Framework of Reference for Languages (CEFR), and it aims to identify linguistic correlates to the proficiency levels defined by the CEFR. The study was conducted in a Swedish school setting, focusing on English, French and Italian, and examined the relationship between CEFR levels (A1–C2) assigned by experienced raters to learners’ written texts and three measures of syntactic complexity (based on length of t-unit, subclause ratio, and mean length of clause (cf. Norris & Ortega, 2009)). Data were elicited through two written tasks (a short letter and a narrative) completed by pupils of L2 English (N = 54) in years four, nine and the final year of upper-secondary school, L3 French (N = 38) in year nine and the final year of upper-secondary school, and L4 Italian (N = 28) in the final year of upper-secondary school and first year of university. The results showed that, globally, there were weak to medium-strong correlations between assigned CEFR levels and the three measures of syntactic complexity in English, French and Italian. Furthermore, it was found that syntactic complexity was homogeneous across the three languages at CEFR level A, whereas syntactic complexity was different across languages at CEFR level B, especially in the data for English and French. Consequences for the empirical validity of the CEFR framework and the nature of the three measures of complexity are discussed.
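The three syntactic-complexity measures named above are simple ratios once clause and T-unit boundaries have been annotated; a minimal sketch with hypothetical counts:

```python
# Toy computation of three syntactic-complexity measures (mean length of
# T-unit, subclause ratio, mean length of clause), assuming clause and
# T-unit counts have already been obtained from annotation.
def complexity(words, clauses, t_units, subclauses):
    return {
        "mean_len_t_unit": words / t_units,     # words per T-unit
        "subclause_ratio": subclauses / clauses,  # dependent clauses per clause
        "mean_len_clause": words / clauses,     # words per clause
    }

# Hypothetical counts for a short learner text.
m = complexity(words=120, clauses=12, t_units=8, subclauses=4)
print(m)
```

In practice the counts would come from manual annotation or a syntactic parser; the ratios themselves are straightforward.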

  • Conference Article
  • Cited 3 times
  • 10.21125/iceri.2018.2114
CONTEXTUALISING THE CEFR: THE UNIVERSITI MALAYSIA PAHANG ENGLISH LANGUAGE PROFICIENCY WRITING TEST
  • Nov 1, 2018
  • Zarina Mohd Ali + 5 more

The Common European Framework of Reference for Languages (CEFR) has made a significant impact on language testing worldwide, particularly on the English language proficiency test (EPT). Many countries including the United States, Canada and Australia have begun aligning their language assessments with CEFR. Although it has been applied worldwide, CEFR arguably lacks connection with stakeholders, socioeducational contexts as well as empirical validation. Much of the criticism revolves around a neglect of non-European contexts. Studies concentrating on both directly and indirectly CEFR-aligned English language tests within the contexts of Asian countries such as Japan and Taiwan have been published. In Malaysia, the English Language Roadmap 2015-2025 takes into consideration aspects of teaching, learning and assessment of the English language based on the six CEFR levels. Therefore, higher education institutions will need to respond to this roadmap by aligning their English language tests to these standards. Nevertheless, to date, no similar studies have been documented within the context of Malaysia. Hence, this paper seeks to fill the gap in the literature by presenting early-stage work on the contextualisation of the comprehensive yet non-exhaustive CEFR to suit the needs and demands of Universiti Malaysia Pahang English Language Proficiency Test stakeholders. The present study emphasizes writing because it is usually considered the most important skill in tertiary education. Through writing, language functions not only as the transmitter of knowledge or enabler for communication but also as a mediating tool for the thinking process that is manifested in written form. Aligning a locally-developed EPT to the CEFR standards will not only provide evidence of test takers’ level of proficiency but also give the local EPT a value-added advantage in terms of the marketability of the CEFR-aligned EPT.

  • Research Article
  • 10.1515/iral-2024-0235
First-language use in English interlanguage: a multi-CEFR-level spoken learner corpus analysis of Taiwanese learners
  • Jul 25, 2025
  • International Review of Applied Linguistics in Language Teaching
  • Lan-Fen Huang

This study explores the phenomenon of Chinese-English code-switching in interviews with 116 Taiwanese learners across proficiency levels within the Common European Framework of Reference (CEFR; Council of Europe. 2020. Common European Framework of Reference for Languages: Learning, teaching, assessment companion volume. Strasbourg: Council of Europe Publishing). The learner corpus data were extracted from the Taiwanese sub-corpus (Huang, Lan-fen. 2014. Constructing the Taiwanese component of the Louvain International Database of Spoken English Interlanguage (LINDSEI). Taiwan Journal of TESOL 11(1). 31–74) of the Louvain International Database of Spoken English Interlanguage (LINDSEI; Gilquin, Gaëtanelle, de Cock Sylvie & Sylviane Granger (eds.). 2010. LINDSEI Louvain International Database of Spoken English Interlanguage . Handbook and CD-ROM. Louvain-la-Neuve: Presses universitaires de Louvain) and its expanded data (Huang, Lan-fen & Tomáš Gráf. 2021. Expanding LINDSEI to spoken learner English from several L1s across CEFR levels. Corpora 16(2). 271–285). The analysis of relative code-switching frequencies revealed a consistent decline from A1 to C1, indicating that as proficiency increased, L1 use declined. Kruskal-Wallis tests presented clear evidence of a difference between higher (B2 and above) and lower (B1 and below) levels. The functions of Chinese use were explored and interpreted on the basis of empirical evidence within their immediate context, following the taxonomy of linguistic functions proposed by Kaneko, Tomoko. 2009. Use of mother tongue in English-as-a-foreign-language speech by Japanese university students. Gakuen 822. 25–41. By analysing a CEFR-rated Taiwanese learner corpus, this study provides practical insights showing how L1 use evolves with proficiency. It also proposes alternatives: English expressions and appropriate communication strategies to address referential code-switching, which probably stems from learners’ limited proficiency. 
These findings yield practical suggestions and linguistic examples to support English language teachers working with Chinese-speaking learners.

  • Research Article
  • 10.61508/refl.v31i2.275057
Aligning Academic Reading Tests to the Common European Framework of Reference for Languages (CEFR)
  • Aug 19, 2024
  • rEFLections
  • Sivakorn Tangsakul + 1 more

Given the significant global influence of the Common European Framework of Reference for Languages: Teaching, Learning, and Assessment (CEFR) on English language education, this study deals with aligning a university’s academic reading tests to the CEFR. It aimed at validating the test construct of the academic reading tests in relation to the proficiency levels defined by the CEFR. The study employs two standard setting procedures outlined in the CEFR Manual: the Familiarization procedure and the Specification procedure, to explore the CEFR level of the academic reading tests as well as the prominent characteristics of the reading texts and the test items in terms of their level and key features. Three academic reading tests were randomly selected. The CEFR Content Analysis Grid for Reading was employed to characterize the content of test items and test tasks. The results indicated that 9 out of 18 reading texts were estimated to correspond to the B2 CEFR level. Texts estimated as B1 and C1 levels were evenly distributed, and none of the reading texts were classified as A1 or C2 levels. Moreover, the findings demonstrated a significant prevalence of B2 and B1 level items. Specifically, B2 items represented the largest proportion at 31.88%, closely followed by B1 items at 25.12%.

  • Research Article
  • Cited 1 time
  • 10.58379/qghs6327
Determining aspects of text difficulty for the Sign Language of the Netherlands (NGT) Functional Assessment instrument
  • Jan 1, 2014
  • Studies in Language Assessment
  • Annieck Van Den Broek-Laven + 4 more

In this paper we describe our work in progress on the development of a set of criteria to predict text difficulty in Sign Language of the Netherlands (NGT). These texts are used in a four year bachelor program, which is being brought in line with the Common European Framework of Reference for Languages (Council of Europe, 2001). Production and interaction proficiency are assessed through the NGT Functional Assessment instrument, adapted from the Sign Language Proficiency Interview (Caccamise & Samar, 2009). With this test we were able to determine that after one year of NGT-study students produce NGT at CEFR-level A2, after two years they sign at level B1, and after four years they are proficient in NGT on CEFR-level B2. As a result of that we were able to identify NGT texts that were matched to the level of students at certain stages in their studies with a CEFR-level. These texts were then analysed for sign familiarity, morpheme-sign rate, use of space and use of non-manual signals. All of these elements appear to be relevant for the determination of a good alignment between the difficulty of NGT signed texts and the targeted CEFR level, although only the morpheme-sign rate appears to be a decisive indicator.

  • Research Article
  • 10.22158/selt.v7n1p1
Developmental Stages and the CEFR Levels in Foreign Language Learners’ Speaking and Writing
  • Dec 17, 2018
  • Studies in English Language Teaching
  • Yumiko Yamaguchi

This paper aims to investigate foreign language learners’ speaking and writing based on a second language acquisition (SLA) theory and the Common European Framework of Reference for Languages (CEFR; Council of Europe, 2001). While the CEFR has been widely used as a reference instrument in foreign language education, there has been insufficient empirical research undertaken on the CEFR levels (e.g., Hulstijn, 2007; Wisniewski, 2017). Also, few studies have examined how the CEFR levels relate to the developmental stages predicted in SLA theories. In this study, spoken and written narratives performed by 60 Japanese learners of English are examined based on one of the major SLA theories, namely Processability Theory (PT; Pienemann, 1998, 2005; Bettoni & Di Biase, 2015), as well as on the CEFR. Results show that the Japanese L1 learners acquire English syntax as predicted in PT in both speaking and writing. In addition, there seems to be a linear correlation between the CEFR levels and PT stages. However, it is also found that the learners at the highest PT stage are not necessarily at a higher CEFR level.

  • Book Chapter
  • Cited 44 times
  • 10.1057/9780230242258_12
Vocabulary Size and the Common European Framework of Reference for Languages
  • Jan 1, 2009
  • James Milton + 1 more

In its earliest stages of development the Common European Framework of Reference for Languages (CEFR) included vocabulary lists in its materials and these gave some indication of the scale of the vocabulary knowledge that the creators were envisaging at the various levels of the framework. More recently these have been removed and learners, textbooks and course syllabuses are placed into the framework levels according to skills-based rather than knowledge-based criteria (Council of Europe, 2003). The purpose of this chapter is to see what happens when vocabulary size measures are placed back into the framework and there are two reasons for wanting to do this. One is academic interest in seeing what vocabulary sizes emerge at the CEFR levels and considering how these compare across levels and across languages. The second reason is a practical one and is to help to make the framework more robust. The skills-based criteria have the virtue of making the framework flexible and highly inclusive, and almost any course, textbook or learner should be able to find a place in the system. However, the penalty for such flexibility is that the levels become imprecise; it is often possible to place learners or textbooks at several of the CEFR levels. This potentially devalues the framework and diminishes its usefulness. The British foreign language exam system in schools, for example, has been criticized for being misplaced within the system and, as a consequence, for misleading those who try to use it (Milton, 2007a).

  • Research Article
  • 10.14456/asj-psu.2018.52
An Evaluation of Alignment between French Language National Test and the Common European Framework of Reference for Languages Using Item Mapping
  • Nov 29, 2018
  • Preud Bairaman + 2 more

Standards-based education has become a crucial issue with which the evaluation of French-language learning in Thailand is required to comply. Since alignment is the core idea in systematic, standards-based reform, this study aimed to evaluate the alignment between Thailand’s Professional and Academic Aptitude Test in French language (PAT 7.1) and the Common European Framework of Reference for Languages by using item mapping, and to study the factors affecting misalignment between the two. Variables were composed of contents and cognitive demands. Populations were 100 items of PAT 7.1, 388 grade-12 students (Matthayomsuksa 6) in the English-French program (academic year 2017) in 10 schools under the South 2’s Development Center of French language (Le Centre pour le Developpement du Francais, CDF Hatyai) and 3 qualified panelists. A sample of 163 students was selected using the multistage sampling method. Research tools consisted of a PAT 7.1 test, two alignment matrices and a set of open-ended questions. Data were analyzed using proportions, the Rasch IRT model, chi-square, percentages and content analysis. The research revealed that 16 items of PAT 7.1 aligned with the Common European Framework of Reference for Languages. Most of them were classified at the A2 level and assessed analysis/investigation skills. Moreover, the item mapping result was coherent with the panelists’ judgement. Three main factors affecting the misalignment were highlighted: the incoherence between item content and standard content, the unmatched levels of cognitive demands, and language usage, including grammar, in the items that was irrelevant to the standard and to students’ language proficiency levels. Keywords: French language, CEFR, alignment, item mapping.

  • Conference Article
  • 10.5167/uzh-200087
Using Multilingual Resources to Evaluate CEFRLex for Learner Applications
  • Jun 2, 2020
  • Johannes Graën + 2 more

The Common European Framework of Reference for Languages (CEFR) defines six levels of learner proficiency, and links them to particular communicative abilities. The CEFRLex project aims at compiling lexical resources that link single words and multi-word expressions to particular CEFR levels. The resources are thought to reflect second language learner needs as they are compiled from CEFR-graded textbooks and other learner-directed texts. In this work, we investigate the applicability of CEFRLex resources for building language learning applications. Our main concerns were that vocabulary in language learning materials might be sparse, i.e. that not all vocabulary items that belong to a particular level would also occur in materials for that level, and, on the other hand, that vocabulary items might be used on lower-level materials if required by the topic (e.g. with a simpler paraphrasing or translation). Our results indicate that the English CEFRLex resource is in accordance with external resources that we jointly employ as gold standard. Together with other values obtained from monolingual and parallel corpora, we can indicate which entries need to be adjusted to obtain values that are even more in line with this gold standard. We expect that this finding also holds for the other languages.

  • Research Article
  • Cited 1 time
  • 10.26378/rnlael1020260
La complejidad lingüística en los niveles de competencia del MCER: el caso de la variedad verbal en la expresión escrita en ELE [Linguistic complexity across CEFR proficiency levels: the case of verb-form variety in written Spanish as a foreign language]
  • Mar 28, 2016
  • Nuria De La Torre García

This study is a contribution to the empirical description of the proficiency levels defined by the Common European Framework of Reference for Languages (CEFR) and certified by the multilevel exam of Spanish for academic purposes of Nebrija University. The data consisted of 124 texts written in response to two tasks of the writing paper of an experimental form of the above-mentioned exam. Expert raters using two descriptor scales, one holistic and one focusing on grammatical range and accuracy, classified the texts into the A-C1 CEFR levels. The study examined the relationship between the assigned CEFR levels and measures of variety of verb forms. The results showed weak correlations (p < 0.05) between the CEFR levels assigned by the raters using the two descriptor scales and the measures of syntactic variety. Furthermore, it was found that some of the analyzed linguistic features could distinguish across the CEFR levels (p < 0.05).

  • Research Article
  • 10.37546/jaltsig.cefr7-5
Application of the CEFR to an Arabic Corpus: A Case Study
  • Mar 25, 2025
  • CEFR Journal - Research and Practice
  • Aziza Zaher

The Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR) was developed by the Council of Europe and first published in 2001. It has since evolved significantly, and new volumes have been published, most recently the CEFR Companion Volume (CEFR/CV) in 2020. The CEFR aims to provide the basis for L2 learning, teaching, and assessment of European languages; however, it has been widely used around the world in non-European contexts. This article presents a case study of the application of the CEFR to an Arabic corpus comprising 214 texts produced by first-year students at Zayed University in the UAE, which is part of a bilingual corpus in Arabic and English. The article focuses on the application of the CEFR to the Arabic texts, which posed specific challenges, including Arabic diglossia, whereby two distinct varieties of the language are used for writing and speaking. Furthermore, the complexities of Arabic grammar include formal features that only appear in writing. There is also some overlap between Arabic and other languages, particularly English, as many English expressions are used in everyday life in Arab societies. These factors, among others, lead to unique issues to consider when applying the CEFR to a written Arabic corpus. However, due to the generic nature of the CEFR descriptors, they have been applied successfully to the assessment of the Arabic written corpus, which provides the basis for further applications of the CEFR to other competencies in Arabic and to other non-European languages. This article describes the process of rating the corpus, outlines the practical implications of applying the CEFR to an Arabic written corpus, and presents an overview of student performance mapped across the six CEFR levels.

  • Research Article
  • Cited 1 time
  • 10.23971/jefl.v11i2.2863
Incorporating CEFR bands and ICT-competences in grammar syllabuses of English Language Education Study Program in Indonesia
  • Sep 8, 2021
  • Journal on English as a Foreign Language
  • Siti Drivoka Sulistyaningrum + 1 more

The incorporation of Information and Communication Technology (ICT) into education has been widely implemented as a 21st-century skill, and the Common European Framework of Reference for Languages (CEFR) is a global standard for describing language proficiency. In the Indonesian context, however, there is a lack of syllabuses incorporating CEFR bands and ICT-competences. This study explores the CEFR levels and ICT-competences incorporated in the grammar syllabuses of English Language Education Study Programs (ELESP) in Indonesian universities. A content analysis method was used. Fifteen grammar syllabuses from 8 universities in Indonesia were purposively selected based on the proportion of private and public universities. All the grammar syllabuses were identified as Basic, Intermediate, or Advanced grammar. The findings revealed that the CEFR level was A1-B1 for basic grammar, A1-B2 for intermediate grammar, and B2-C2 for advanced grammar. In addition, ICT-competences across the syllabuses showed insufficient utilization, dominated by Knowledge Acquisition and less oriented toward Knowledge Deepening and Knowledge Creation. These findings serve as a reference for adjusting and re-aligning existing syllabuses in line with the CEFR bands and enriching them with ICT-competences.

  • Research Article
  • Cited 3 times
  • 10.5539/elt.v13n1p124
A Comparative Study of Test Takers’ Performance on Computer-Based Test and Paper-Based Test Across Different CEFR Levels
  • Dec 20, 2019
  • English Language Teaching
  • Don Yao

Computer-based tests (CBT) and paper-based tests (PBT) are two test modes that have been widely adopted in the field of language testing and assessment over the last few decades. With the rapid development of technology, universities and educational institutions increasingly strive to deliver tests on computers. Research comparing the two test modes has therefore attracted much attention, investigating whether the PBT could be completely replaced. At the same time, task difficulty is always a key element in test takers’ performance. Numerous studies have laid a solid foundation for the comparative study of test takers’ performance on CBT and PBT, but the perspective of task difficulty across different Common European Framework of Reference for Languages (CEFR) task levels in particular remains under-explored. This study therefore compared test takers’ performance on CBT and PBT across tasks with different CEFR levels. A total of 289 principal-recommended high school test takers from Macau took the pilot Test of Academic English (TAE) at a local university. The results indicated a difference between test takers’ performance on the two test modes across CEFR levels, but only the CEFR A2 level showed a statistically significant difference between CBT and PBT. As technology continues to develop, it is essential for the university to consider switching the test mode from PBT to CBT.

More from: Studies in Second Language Acquisition
  • Research Article
  • 10.1017/s0272263125101368
Comparing early-stage L2 processing of derived and inflected words: a conceptual replication of Jacob et al. (2018) with Chinese learners of L2 English
  • Oct 13, 2025
  • Studies in Second Language Acquisition
  • Zhaohong Wu + 1 more

  • Research Article
  • 10.1017/s0272263125101289
The Impact of L1 Speaking Style, Task Mode, and L2 Proficiency on L2 Fluency: A Within-subject Study of Monologic and Dialogic Speech
  • Oct 8, 2025
  • Studies in Second Language Acquisition
  • Pauliina Peltonen + 2 more

  • Research Article
  • 10.1017/s0272263125101320
The love factor in variationist SLA
  • Oct 6, 2025
  • Studies in Second Language Acquisition
  • Mason A Wirtz

  • Research Article
  • 10.1017/s0272263125101290
Timing matters for interactive task-based learning
  • Oct 6, 2025
  • Studies in Second Language Acquisition
  • Yuichi Suzuki + 5 more

  • Research Article
  • 10.1017/s0272263125101307
The role of statistical learning in the L2 acquisition and use of nonadjacent predicate-argument constructions
  • Sep 29, 2025
  • Studies in Second Language Acquisition
  • Jiaqi Feng Guo + 1 more

  • Research Article
  • 10.1017/s0272263125101125
Aligning linguistic complexity with the difficulty of English texts for L2 learners based on CEFR levels
  • Sep 3, 2025
  • Studies in Second Language Acquisition
  • Xiaopeng Zhang + 1 more

  • Research Article
  • 10.1017/s0272263125101095
Saving the reliability of inhibitory control measures? An extension of Huensch (2024) and Hui and Wu (2024)
  • Aug 12, 2025
  • Studies in Second Language Acquisition
  • Zhiyi Wu + 2 more

  • Research Article
  • 10.1017/s0272263125100995
Usage-based analysis of L2 oral proficiency: Characteristics of argument structure construction use
  • Aug 4, 2025
  • Studies in Second Language Acquisition
  • Hakyung Sung + 1 more

  • Research Article
  • 10.1017/s0272263125101046
Modeling relationships between learning conditions, processes, and outcomes: An introduction to mediation analysis in SLA research – CORRIGENDUM
  • Aug 1, 2025
  • Studies in Second Language Acquisition
  • Ruirui Jia + 1 more

  • Research Article
  • 10.1017/s0272263125101034
Optimizing distributed practice online: A conceptual replication of Cepeda et al. (2009) – ADDENDUM
  • Aug 1, 2025
  • Studies in Second Language Acquisition
  • John Rogers + 2 more
