Abstract

ChatGPT has generated much dialogue on the implications of large language models (LLMs) for language teaching and learning. Because language teachers are uniquely positioned to teach metalinguistic awareness, they can support their learners' understanding of how LLMs are shaped by language ideologies and how their outputs are indexical of social power. This awareness would help learners use LLMs more conscientiously, deciding how to interact with them and how to adapt their outputs for their own purposes. This article introduces LLMs as statistical systems that predict linguistic forms. It surfaces two language ideologies that have shaped their development: the belief in the separability of language from its social contexts and the belief in the value of ever-larger text corpora. It also highlights some ideological effects of these beliefs, including uneven performance across languages, text outputs that reflect biases, privacy violations, circulation of copyrighted materials, misinformation, and hallucinations. Some suggestions for mitigating these effects are offered.
