Abstract

In metalexicographical research, experts have judged the performance of technologies such as OpenAI's Generative Pre-trained Transformer (GPT) on lexicographic production tasks as promising yet inferior to the work of human lexicographers. It remains unclear whether this perceived inferiority limits the effectiveness of AI-generated lexicography in resolving practical language doubts. Accordingly, this study compares the effectiveness of AI-generated definitions with definitions from the Macmillan English Dictionary (MED) in resolving vocabulary doubts in a multiple-choice reading task designed to test lexical knowledge. It involved 43 L2 English users in the third year of an English studies degree at a Spanish university. Students provided with MED definitions performed better on the reading task than those without access to definitions. However, the performance of students provided with AI-generated definitions did not differ significantly from that of students given MED definitions or no definitions at all. The implications of these findings are discussed, along with avenues for further research.

