Abstract

Large Language Models (LLMs) such as ChatGPT are likely to amplify epistemic injustice because of their lack of transparency and the untraceability of their data sources. The unethical alienation of original knowledge producers from their intellectual products, which LLMs repackage as artificial intelligence, conceals power asymmetries in the global system of knowledge production and dissemination. As Miranda Fricker (2010) elaborates, Western White male actors have traditionally dominated knowledge production; ChatGPT and other LLMs are therefore inclined to reproduce patriarchal perspectives as universal understandings of the world. Our commentary applies this logic to accounting practice and research in Africa and asserts that epistemic injustice, resulting from colonization and racism, means that ontological and epistemological approaches situated in the accounting needs and experiences of African communities are missing from, or poorly articulated by, ChatGPT and other LLMs. If LLMs are to attain legitimacy as (ethical) sources of knowledge, regulation must be enforced to ensure transparency, as a foundation for promoting pluriversality and eliminating epistemic injustice.
