Abstract

Research communities across disciplines recognize the need to diversify and decolonize knowledge. While artificial intelligence-supported large language models (LLMs) can help with access to knowledge generated in the Global North and demystify publication practices, they are still biased toward dominant norms and knowledge paradigms. LLMs lack agency, metacognition, knowledge of the local context, and an understanding of how human language works. These limitations raise doubts about their ability to develop the kind of rhetorical flexibility that is necessary for adapting writing to ever-changing contexts and demands. Thus, LLMs are likely to drive both language use and knowledge construction toward homogeneity and uniformity, reproducing existing biases and structural inequalities. Because their output is based on shallow statistical associations, these models cannot achieve linguistic creativity to the same extent as humans, particularly across languages, registers, and styles. This is the area where key stakeholders in academic publishing—authors, reviewers, and editors—have the upper hand, as our applied linguistics community strives to increase multilingual practices in knowledge production.
