Abstract

As AI systems become increasingly competent language users, it is an apt moment to consider what it would take for machines to understand human languages. This paper considers whether language models such as GPT-3, or chatbots, might be able to understand language, focusing on whether they could possess the relevant concepts. A significant obstacle is that systems of both kinds interact with the world only through text, and thus seem ill-suited to understanding utterances concerning the concrete objects and properties which human language often describes. Language models cannot understand human languages because they perform only linguistic tasks, and therefore cannot represent such objects and properties. Chatbots, however, may perform tasks concerning the non-linguistic world, which makes them better candidates for understanding. Despite their lack of perceptual contact with the world, chatbots can also possess the concepts necessary to understand human languages, thanks to the language-mediated concept-sharing described by social externalism about mental content.



Introduction

Babylon Health, a London-based private healthcare company, claims that their AI can ‘understand and recognise the unique way that humans express their symptoms’ (babylonhealth.com/ai; accessed 26 June 2020). The view I will argue for here is that chatbots can share our concepts in the manner required to understand human languages, but language models such as GPT-3 cannot. The reason for this difference is that the two kinds of system have different functions. Lake and Murphy argue that language models lack semantic knowledge on the grounds that the representations underlying their use of language are not suitable for supporting uses such as describing salient features of the environment, forming accurate representations of the world on the basis of linguistic input, and choosing linguistic outputs so as to achieve goals. This deficiency is partly a result of language models’ function, and partly a result of the way in which they perform it; one could imagine a next-word-prediction system which worked in a much more human-like way. Lake and Murphy argue that language models do not represent the right information in connection with the word ‘skin’, in the right way, to possess (or model) semantic knowledge; my claim is that they do not employ representations which refer to the skin.
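The distinction drawn here turns on function: a language model's task begins and ends with text. To make this concrete, the following minimal sketch (my illustration, using the publicly available GPT-2 model via the Hugging Face transformers library rather than GPT-3 itself; the ‘skin’ prompt is an arbitrary example echoing Lake and Murphy's) shows that the entire output of such a system, given a text prefix, is a probability distribution over candidate next tokens.

    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    # Load a small, publicly available causal (next-word-prediction) model.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The patient said that her skin"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        # The model's only output is a score for every vocabulary item
        # at every position in the text; input and output are both text.
        logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

    # Distribution over candidate next tokens at the final position.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")

Whatever internal representations such a model develops in computing these probabilities, its inputs and outputs are purely linguistic; this is the sense in which language models perform only linguistic tasks.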

Concepts and Systematicity
Social Externalism and Conceptual Content
Conceptions and Cognitive Significance
Findings
Conclusion
