Abstract

After briefly reviewing the appealing psychological properties of PDP systems, an introduction to their historical roots and basic computational mechanisms is provided. A variety of network architectures are described, including one-layer perceptrons, backpropagation networks, Boltzmann machines and recurrent systems. Three PDP simulations are analysed: first, a model that purports to learn the past tense of English verbs; second, a constraint satisfaction network that is able to interpret the alternative configurations of a Necker cube; finally, a recurrent network that is able to decipher membership of grammatical classes from word-order information. The notion that PDP approaches provide a sub-symbolic account of cognitive processes, in contrast to the classical symbolic view, is examined. The article concludes with brief speculation concerning the explanatory power of PDP systems at the cognitive level of functioning.
