Abstract
ChatGPT has generated much dialogue on the implications of large language models (LLMs) for language teaching and learning. Because language teachers are uniquely positioned to teach metalinguistic awareness, they can help their learners understand how LLMs are shaped by language ideologies and how their outputs are indexical of social power. This awareness would help learners use LLMs more conscientiously, deciding how to interact with them and how to adapt their outputs for their own purposes. This article introduces LLMs as statistical systems that predict linguistic forms. It surfaces two language ideologies that have shaped their development: the belief that language is separable from its social contexts and the belief in the value of ever-larger text corpora. It also highlights some of their ideological effects, including uneven performance across languages, text outputs that reflect social biases, privacy violations, circulation of copyrighted material, misinformation, and hallucinations. Some suggestions for mitigating these effects are offered.
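To make the abstract's characterization of LLMs as "statistical systems that predict linguistic forms" concrete, the sketch below shows the idea on a much smaller scale: a toy bigram model that predicts the next word from observed frequencies. The corpus and function names here are hypothetical illustrations, not part of the article; real LLMs use neural networks over vastly larger corpora, but the underlying task — predicting probable continuations — is the same.

```python
from collections import Counter, defaultdict

# A tiny hypothetical corpus; real LLMs train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Because the model can only reproduce patterns present in its training data, this miniature example also hints at why LLM outputs mirror the biases and gaps of the corpora they are built from.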
Working papers in Applied Linguistics and Linguistics at York