Abstract
This paper discusses the way the concept of culture is discursively constructed by large language models that are trained on massive collections of cultural artefacts and designed to produce probabilistic representations of culture based on this training data. It makes the argument that, no matter how ‘diverse’ their training data is, large language models will always be prone to stereotyping and oversimplification because of the mathematical models that underpin their operations. Efforts to build ‘guardrails’ into systems to reduce their tendency to stereotype can often result in the opposite problem, with issues around culture and ethnicity being ‘invisiblised’. To illustrate this, examples are provided of the stereotypical linguistic styles and cultural attitudes models produce when asked to portray different kinds of ‘persona’. The tendency of large language models to gravitate towards cultural and linguistic generalities is contrasted with trends in intercultural communication towards more fluid, socially situated understandings of interculturality, and implications for the future of cultural representation are discussed.