This study examines the effects of corrective feedback (CF) on language learners' writing anxiety, writing complexity, fluency, and accuracy, and compares the effectiveness of feedback from human teachers with that of an AI-driven application called Poe. The study involved three intact classes, each with 25 language learners. Using a quasi-experimental design with pretest and posttest measures, one class received feedback from the teacher, one from the Poe application, and the third received no feedback on their writing. Data were generated through tests and a writing anxiety scale developed for the study. Data analysis, conducted using one-way ANOVA tests, revealed significant effects of both teacher-provided and AI-generated feedback on learners' writing anxiety, accuracy, and fluency. Notably, the group that received AI-generated feedback outperformed both the group that received teacher feedback and the group that received no feedback. Learners in the AI-generated feedback group also experienced a greater reduction in writing anxiety than their peers. These results highlight the substantial impact of AI-generated CF on improving writing outcomes and alleviating anxiety among undergraduate language learners at East China University of Political Science and Law. The study demonstrates the benefits of integrating AI applications into language learning contexts, particularly by promoting a supportive environment in which students can develop their writing skills. Educators, researchers, and developers can use these findings to inform pedagogical practices and technological interventions that optimize the language learning experience in comparable settings. The research underscores the effectiveness of AI-driven applications in language teaching and the importance of considering learners' psychological well-being, particularly anxiety levels, when designing language learning interventions.