Abstract

Recently, the integration of linguistics and technology has been promoted and widely used in linguistics and English writing research for several purposes. One of those purposes is to evaluate English as a Foreign Language (EFL) writing ability using electronic assessment tools. In the current study, an automated writing evaluation tool (Coh-Metrix) was used to indicate English-major students’ writing performances based on the discourse components of their texts. The English texts generated for each writing task on two different topics were collected. The corpus analyses gathered from Coh-Metrix identified linguistic and discourse features that were interpreted to determine the 40 EFL undergraduate students’ English writing abilities. The students wrote and revised their essays by hand in class and then resubmitted them in digital form with corrections made. The results showed that these students demonstrated linguistic flexibility across the writing assignments they produced. The analyses also indicated that text length, word concreteness, referential cohesion, and deep cohesion affected the students’ writing performances across the writing tasks. In addition, the findings suggest practical value in using automated text analysis to support teachers’ instructional decisions, which could help identify improvements in students’ writing skills.

Highlights

  • The uses and effects of automated tools in analyzing students’ writing abilities have been investigated for several years (Buckingham Shum, Sándor, Goldsmith, Bass, & McWilliams, 2017; Haswell, 2000; Ullmann, 2019)

  • As previously stated, the students’ texts were analyzed for word count and five discourse components, namely narrativity, syntactic simplicity, word concreteness, referential cohesion, and deep cohesion, in response to the research questions

  • The statistical data on word count and the five discourse components are presented in the following sections


Introduction

The uses and effects of automated tools in analyzing students’ writing abilities have been investigated for several years (Buckingham Shum, Sándor, Goldsmith, Bass, & McWilliams, 2017; Haswell, 2000; Ullmann, 2019). Recent advances in computational linguistics and discourse processing have shown that educators and researchers can automate many language- and text-processing mechanisms. Several studies have employed automated text analysis methods in educational contexts, especially in the areas of writing assessment (Ullmann, 2019; Wilson & Czik, 2016) and discourse analysis (Ferretti & Lewis, 2019). While analyzing text manually poses constraints on writing pedagogy and research, such as adding time and effort and limiting large-scale explorations, automated methods are increasingly replacing manual analysis because they offer several benefits for writing instruction and research, such as saving time, providing immediate feedback, and covering multi-language texts (Deane, 2013; Humphreys & Wang, 2017; Ullmann, 2019). Contributing importantly to a discourse model, an automated computational linguistic tool was used to draw inferences about students’ English writing performances.
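To make the idea of automated discourse analysis concrete, the sketch below computes two toy measures of the kind such tools report: word count and a crude proxy for referential cohesion (the proportion of content words each sentence shares with the one before it). This is purely illustrative; Coh-Metrix's actual indices rely on far more sophisticated parsing, latent semantic analysis, and lexical databases, and the stopword list, tokenizer, and overlap formula here are simplifying assumptions, not the tool's method.

```python
# Toy sketch of automated text analysis: word count plus a naive
# referential-cohesion proxy (content-word overlap between adjacent
# sentences). NOT Coh-Metrix's algorithm; assumptions only.
import re

# Minimal illustrative stopword list (an assumption, not a standard set).
STOPWORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in",
             "on", "is", "are", "was", "were", "it", "that", "this"}

def sentences(text):
    """Naively split text into sentences on ., !, or ?."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def content_words(sentence):
    """Lowercased alphabetic tokens minus the small stopword list."""
    return {w for w in re.findall(r"[a-z']+", sentence.lower())
            if w not in STOPWORDS}

def word_count(text):
    """Total number of alphabetic tokens in the text."""
    return len(re.findall(r"[a-z']+", text.lower()))

def referential_overlap(text):
    """Mean Jaccard overlap of content words between each sentence and
    the previous one -- a crude proxy for referential cohesion."""
    sents = [content_words(s) for s in sentences(text)]
    if len(sents) < 2:
        return 0.0
    overlaps = []
    for prev, cur in zip(sents, sents[1:]):
        union = prev | cur
        overlaps.append(len(prev & cur) / len(union) if union else 0.0)
    return sum(overlaps) / len(overlaps)

essay = ("The students revised their essays. The essays improved after "
         "revision. Automated tools scored the essays quickly.")
print(word_count(essay))                       # → 16
print(round(referential_overlap(essay), 3))    # → 0.134
```

A real system would layer syntactic parsing and semantic similarity on top of such surface counts, but even this sketch shows how per-text indices can be produced automatically and compared across students' writing tasks.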
