Abstract

Recent criticism of large language models (LLMs) and generative AI has focused on the way these applications are little more than "stochastic parrots": technological devices that generate seemingly intelligible statements but do not know and cannot understand a word of what they say. If the terms of these evaluations sound familiar, they should. They are rooted in foundational concepts regarding language and technology that have been definitive of Western systems of knowing since the time of Plato. The current crop of critical correctives and well-intended LLM hype-reduction efforts reproduces, or one might be tempted to say "parrots," this ancient wisdom. And it works, precisely because it just sounds like good common sense. But that is the problem. This presentation takes aim at this largely unquestioned theoretical framework, identifies its inherent limitations and inability to accurately understand the opportunities and challenges of LLMs, and concludes by providing a more robust method for responding to and taking responsibility for these technological innovations.
