Abstract
A number of recent models of human information processing have been based on connectionist architectures. Such models are designed to illustrate specific psychological principles and are usually implemented in small networks. The assumption implicit in the work is that principles illustrated in small networks can be applied easily to brain-size networks by scaling up as required. To consider the scaling question, we used a back-propagation algorithm to compare learning in both large and small networks and found that learning depended on the size of the network. In small networks, increasing η (the rate-of-learning parameter) beyond 1 increased the rate of learning; in large networks, the same manipulation reduced the rate of learning. The example illustrates the difficulty of generalizing across network size and calls into question the assumption that principles illustrated in small networks can be applied to the brain by expansion of the network.
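To make the role of η concrete, the following is a minimal sketch of the kind of back-propagation weight update the abstract refers to, with η as the rate-of-learning parameter. The network architecture, pattern counts, sigmoid units, and squared-error measure here are illustrative assumptions, not a reproduction of the paper's actual simulations.

```python
# A minimal sketch (not the authors' original simulation) of back-propagation
# in a one-hidden-layer network with sigmoid units and a squared-error measure.
# "eta" is the rate-of-learning parameter η discussed in the abstract.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(n_units, eta, n_patterns=20, epochs=200, seed=0):
    """Train an n_units -> n_units -> n_units network; return the final mean squared error."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_patterns, n_units))          # input patterns
    T = rng.integers(0, 2, (n_patterns, n_units))  # binary target patterns
    W1 = rng.normal(0, 0.1, (n_units, n_units))    # input -> hidden weights
    W2 = rng.normal(0, 0.1, (n_units, n_units))    # hidden -> output weights
    for _ in range(epochs):
        H = sigmoid(X @ W1)                        # hidden activations
        O = sigmoid(H @ W2)                        # output activations
        err_out = (O - T) * O * (1 - O)            # error signal at the output layer
        err_hid = (err_out @ W2.T) * H * (1 - H)   # error back-propagated to the hidden layer
        W2 -= eta * H.T @ err_out                  # weight changes are scaled by eta
        W1 -= eta * X.T @ err_hid
    return np.mean((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2)

# Compare a "small" and a "large" network at eta values below and above 1;
# in larger networks the bigger steps taken with eta > 1 can slow or destabilize learning.
for n in (5, 100):
    for eta in (0.5, 2.0):
        print(f"units={n:4d}  eta={eta:.1f}  final MSE={train(n, eta):.4f}")
```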