Abstract

One of the central claims associated with the parallel distributed processing approach popularized by D.E. Rumelhart, J.L. McClelland and the PDP Research Group is that knowledge is coded in a distributed fashion. Within this perspective, localist representations are widely rejected. It is important to note, however, that connectionist networks can learn localist representations, and many connectionist models depend on localist coding for their functioning. Accordingly, a commitment to distributed representations should be considered a specific theoretical claim regarding the structure of knowledge rather than a core principle, as often assumed. In this paper, it is argued that there are fundamental computational and empirical challenges that distributed connectionist theories have not yet addressed but that localist approaches readily accommodate. This is highlighted in the context of modeling word and nonword naming, the domain in which some of the strongest claims have been made. It is shown that current PDP models provide a poor account of naming monosyllabic items, and that distributed representations make it difficult for these models to scale up to more complex language phenomena. At the same time, models that learn localist representations are shown to hold promise in supporting many of the core reading and language functions on which PDP models fail. It is concluded that the common rejection of localist coding schemes within connectionist architectures is premature.
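To make the contrast at issue concrete, the following is a minimal illustrative sketch (not taken from the paper) of the two coding schemes: a localist code assigns each word its own dedicated unit, whereas a distributed code represents each word as a pattern of activity shared across many units. The vocabulary, vector sizes, and values are invented for illustration only.

    # Illustrative sketch of localist vs. distributed coding.
    # Vocabulary, dimensionality, and vectors are hypothetical.
    import numpy as np

    vocab = ["dog", "cat", "mat"]   # hypothetical three-word vocabulary

    # Localist coding: one-hot vectors, one dedicated unit per word.
    localist = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

    # Distributed coding: each word is a dense pattern over shared units,
    # so no single unit stands for any one word on its own.
    rng = np.random.default_rng(0)
    distributed = {w: rng.normal(size=8) for w in vocab}

    print(localist["dog"])      # e.g. [1. 0. 0.] -- unit 0 codes "dog" alone
    print(distributed["dog"])   # a pattern spread across all 8 units

The point of the sketch is only that the two schemes differ in whether individual units can be interpreted as standing for individual items; nothing here bears on how such codes are learned.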
