Abstract

In metalexicographical research, experts have judged the performance of technologies such as OpenAI's Generative Pre-trained Transformer (GPT) in lexicographic production tasks as promising yet inferior to human lexicographers. It remains unclear whether this perceived inferiority limits the effectiveness of AI-generated lexicography in resolving practical language doubts. Accordingly, this study compares the effectiveness of AI-generated definitions with definitions from the Macmillan English Dictionary (MED) in resolving vocabulary doubts in a multiple-choice reading task designed to test lexical knowledge. It involved 43 L2 English users in the third year of an English studies degree at a Spanish university. Students provided with MED definitions performed better on the reading task than those without access to definitions. However, the performance of students provided with AI-generated definitions did not differ significantly from that of students given MED definitions or no definitions at all. The implications of these findings are discussed along with avenues for further research.
