Abstract

This study evaluates the accuracy of ChatGPT's explanations of grammatical concepts. As a growing number of English teachers turn to ChatGPT to judge grammaticality, the study probes the tool's potential and limitations for English education and assessment. ChatGPT's explanations of 140 paired sentences drawn from 11 grammatical categories (sentence types, verbs, verbals, passive voice, subjunctive mood, relatives, special structures, conjunctions, nouns and articles, adjectives and adverbs, and prepositions) were collected and examined to determine how much of the information was valid. The explanations were then checked against authoritative corpus resources, namely the Corpus of Contemporary American English (COCA) and the Google Books Ngram Viewer (GBNV). The results revealed that ChatGPT's explanations were accurate approximately 73% of the time on average; the remaining 27% contained misleading or even self-contradictory information. In a companion survey, most teachers viewed ChatGPT's potential for teaching and testing somewhat favorably while remaining aware that it can produce inaccurate information. Overall, the findings suggest that even ChatGPT's state-of-the-art technology does not yet meet the standard of grammatical accuracy required for teaching and testing and that it should be supplemented with authoritative English corpora.
