Abstract

In this response to commentators, I agree with those who suggested that the distinction between exemplar- and abstraction-based accounts is something of a false dichotomy and therefore move to an abstractions-made-of-exemplars account under which (a) we store all the exemplars that we hear (subject to attention, decay, interference, etc.) but (b) in the service of language use, re-represent these exemplars at multiple levels of abstraction, as simulated by computational neural-network models such as BERT, ELMo and GPT-3. Whilst I maintain that traditional linguistic abstractions (e.g. a DETERMINER category; SUBJECT VERB OBJECT word order) are no more than human-readable approximations of the type of abstractions formed by both human and artificial multiple-layer networks, I express hope that the abstractions-made-of-exemplars position can point the way towards a truce in the language acquisition wars: We were all right all along, just focusing on different levels of abstraction.

Highlights

  • Let’s not split hairs, since the whole point of the modified account that I sketch here is to collapse the exemplar–abstraction distinction. The gist of this modified account is that, yes, we store all the exemplars that we hear, but that – in the service of language use – these exemplars are re-represented in such a way as to constitute abstractions (‘Abstractions made of exemplars’; see Lieven et al.’s [2020] claim that ‘children generalise at multiple levels of granularity’)

  • As we will see in more detail shortly, a useful metaphor for this account is a multiple-level connectionist neural network that stores every exemplar, re-representing it in increasingly abstract ways as we move up the hidden layers

  • The point I was overlooking was this: if we store abstractions at multiple levels simultaneously, it doesn’t matter if the highest-level abstractions don’t explain every case; exemplars and lower-level abstractions are there to take up the slack
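The multiple-level network metaphor in the highlights above can be sketched in code. This is a toy illustration, not the author's model or an actual BERT/ELMo/GPT-3 implementation: the layer sizes, random weights and random "exemplar" vectors are all invented. The point it illustrates is purely architectural: every exemplar is retained at the input level, and each successive hidden layer holds a lower-dimensional re-representation of it, so the same stored exemplar exists simultaneously at several levels of abstraction.

```python
# Illustrative sketch only (assumed architecture, not the author's model):
# exemplars pass through stacked hidden layers; each layer is a smaller,
# more abstract re-representation, and ALL levels are kept side by side.
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    # Random fixed weights; a real model would learn these from prediction.
    w = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out))
    return lambda x: np.tanh(x @ w)

# Arbitrary sizes: 50-dim exemplars re-represented at 20, 8, then 3 dims.
layers = [make_layer(50, 20), make_layer(20, 8), make_layer(8, 3)]

def re_represent(exemplar):
    """Return the raw exemplar plus its representation at every layer."""
    reps = [exemplar]
    for layer in layers:
        reps.append(layer(reps[-1]))
    return reps

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two distinct "exemplars" (random stand-ins for, say, two uses of a word);
# we can compare them at each of the four levels of representation.
x, y = rng.normal(size=50), rng.normal(size=50)
sims = [cosine(a, b) for a, b in zip(re_represent(x), re_represent(y))]
```

Because every level is stored, a higher-level abstraction that fails to cover some case costs nothing: the lower-level representations and the raw exemplar remain available to "take up the slack", exactly as the final highlight suggests.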


Summary

Word meanings

I was surprised to see that not one of the commentators took issue with the central claim of this section of the original target article: that word meanings are structured as exemplars, rather than as prototype categories based around a central meaning. I am not entirely persuaded by these findings, given that they relate to categorization problems that seem to rely on fairly explicit, verbalizable knowledge (e.g. that, despite their appearance, dolphins are mammals, not fish), rather than to naturalistic word learning in young children (see Brooks & Kempe [2020] on the issue of explicitness). None of this is to say that I reject the idea of word- and concept-level abstractions entirely, as I did in my original target article. A model of this type will form abstractions across distinct uses of table (e.g. a dinner table vs a table of results) only when doing so aids it in its masked-word-prediction task.
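The masked-word-prediction objective mentioned above can be made concrete with a toy sketch. This is not BERT or any neural model: it is a simple count-based stand-in on an invented four-sentence corpus, meant only to show the shape of the task that drives such models to generalise across distinct uses of a word like table.

```python
# Toy illustration of the masked-word-prediction task (assumed setup:
# a count-based predictor on an invented corpus, NOT an actual BERT model).
from collections import Counter

corpus = [
    "she set the table for dinner",
    "he wiped the table after dinner",
    "the table of results shows the effect",
    "see the table of results below",
]

def predict_masked(sentence_with_mask):
    """Guess the masked word by matching its immediate neighbours
    against every word position in every corpus sentence."""
    target = sentence_with_mask.split()
    i = target.index("[MASK]")
    votes = Counter()
    for sent in corpus:
        words = sent.split()
        for j, w in enumerate(words):
            score = 0
            # One vote per matching neighbouring word.
            if i > 0 and j > 0 and words[j - 1] == target[i - 1]:
                score += 1
            if i + 1 < len(target) and j + 1 < len(words) \
                    and words[j + 1] == target[i + 1]:
                score += 1
            if score:
                votes[w] += score
    return votes.most_common(1)[0][0]

print(predict_masked("he set the [MASK] for dinner"))  # prints "table"
```

Note that the predictor succeeds on both the furniture and the data-table contexts only because it pools evidence across all uses of table; in a learned model, that pooling pressure is precisely what would drive an abstraction across the word's distinct senses.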

Morphologically inflected words
Phonetics and phonology
Bringing it all together
Where do we go from here?
