Abstract

This study explored the feedback quality of Pigai, an Automated Writing Evaluation (AWE) system that has been widely used in English teaching and learning in China. It examined not only the diagnostic precision of the feedback but also students’ perceptions of using that feedback in their daily writing practice. Using 104 university students’ final exam essays as the research material, a paired-sample t-test was conducted to compare the mean number of errors identified by Pigai with the mean number identified by professional teachers. The results showed that Pigai feedback did not diagnose the essays as well as the feedback given by experienced teachers; however, it was quite competent at identifying lexical errors. The analysis of students’ perceptions indicated that most students regarded Pigai feedback as multi-functional but found it inadequate in identifying collocation errors and in offering suggestions on syntactic use. The implications and limitations of the study are discussed at the end of the paper.
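For readers unfamiliar with the statistical procedure mentioned above, the sketch below illustrates how a paired-sample t-test of this kind can be run. It is only an illustration, not the authors’ analysis script, and the error counts in it are hypothetical placeholders rather than the study’s data.

```python
# Illustrative sketch of a paired-sample t-test comparing, essay by essay,
# the number of errors flagged by Pigai and by a teacher.
# The arrays below are hypothetical placeholders, NOT the study's data.
import numpy as np
from scipy import stats

# Hypothetical per-essay error counts (one pair of counts per essay)
pigai_errors   = np.array([4, 2, 5, 3, 6, 1, 4, 3])
teacher_errors = np.array([6, 3, 7, 4, 8, 2, 5, 5])

# Paired t-test on the same essays rated by both sources
t_stat, p_value = stats.ttest_rel(pigai_errors, teacher_errors)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

A paired design is used because each essay receives two ratings (one from Pigai, one from a teacher), so the test compares the within-essay differences rather than two independent groups.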

Highlights

  • With the development of computer and information science, automated writing evaluation (AWE) systems have been drawing increasing attention from researchers in language teaching and assessment

  • Since China has one of the largest populations of English as a foreign language (EFL) learners, a study investigating the feedback quality of an AWE system widely applied in China is of vital importance

  • Pigai could not identify logic problems at all, which both teachers found to be common in the students’ essays. This corroborates the research of Dikli and Bleyle [8], in that AWE feedback was effective only in reflecting errors related to lower-order language skills and less helpful in revealing deficiencies in higher-order skills

Summary

Introduction

With the development of computer and information science, automated writing evaluation (AWE) systems have been drawing increasing attention from researchers in language teaching and assessment. Because AWE has many advantages over human assessment, including high efficiency, high consistency, and low cost [1,2], many high-stakes tests have incorporated it into their rating process. The current study is dedicated to evaluating the quality of the feedback provided by Pigai, aiming to offer suggestions for users of this system and to inspire more empirical research validating the effectiveness of AWE systems widely applied in EFL countries.

