Abstract

Summary form only given, as follows. Neural models of computing are defined in terms of large numbers of interconnected neuron-like units. These models have been implemented on various parallel processors, employing relatively coarse-grained parallelism at the level of neurons or groups of neurons. The authors present a novel algorithm for parallelism at the synaptic level on fine-grained mesh-connected systolic arrays. The resulting system is shown to perform extremely well, computing at the rate of 300 million connections per second during generalized delta rule learning for a multilayered neural network.
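For readers unfamiliar with the learning rule referenced above, the following is a minimal sketch of one generalized delta rule (backpropagation) step for a toy two-layer sigmoid network. The network sizes, learning rate, and variable names are illustrative assumptions, not the authors' systolic formulation; the point is that each weight update touches one "connection," which is the unit the paper's connections-per-second rate counts, and which a systolic array can update in parallel per synapse.

```python
import numpy as np

# Sketch of one generalized delta rule step (illustrative; not the
# authors' systolic-array algorithm). Sizes/learning rate are assumed.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy network: 4 inputs -> 3 hidden units -> 2 outputs
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(2, 3))
x = rng.normal(size=4)          # one input pattern
t = np.array([0.0, 1.0])        # target output
lr = 0.1                        # learning rate (assumed)

# forward pass
h = sigmoid(W1 @ x)
y = sigmoid(W2 @ h)

# backward pass: per-layer error signals (the "delta" terms)
delta_out = (t - y) * y * (1.0 - y)
delta_hid = (W2.T @ delta_out) * h * (1.0 - h)

# weight updates: one scalar update per connection. A fine-grained
# systolic array assigns these outer-product updates to individual
# processing elements, one per synapse.
W2 += lr * np.outer(delta_out, h)
W1 += lr * np.outer(delta_hid, x)
```

At the synaptic level of parallelism, every element of these outer products is an independent multiply-accumulate, which is what makes a mesh-connected systolic array a natural fit.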
