Abstract

This paper investigates the architectural requirements for simulating neural networks on massively parallel multiprocessors. We first model the connectivity patterns of large neural networks, then develop a distributed processor/memory organization for efficiently simulating asynchronous, value-passing connectionist models. On the basis of the network connectivity and the mapping policy, we estimate the volume of messages that must be exchanged among physical processors to simulate the weighted connections of a neural network. This estimate determines the interprocessor communication bandwidth required, as well as the optimal number and granularity of processors needed to meet a particular cost/performance goal. The suitability of existing computers is assessed in light of these estimated architectural demands. The structural model offers an efficient methodology for mapping virtual neural networks onto a real parallel computer, making it possible to execute large-scale neural networks on a moderately sized multiprocessor. These mapping techniques are useful both to architects of new-generation computers and to researchers in neural networks and their applications. Until the technology for direct hardware implementation of large neural networks becomes available, simulation remains the most viable alternative for the connectionist community.
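
To make the message-volume estimate concrete, the sketch below works through the simplest version of such a calculation. It is a hypothetical illustration rather than the paper's actual model: it assumes a uniform random neuron-to-processor mapping and a uniform average fan-out, whereas the paper's connectivity-aware structural mapping is designed to reduce this cross-processor traffic below the random baseline.

    # Back-of-envelope estimate of interprocessor message volume, in the
    # spirit of the abstract's analysis. Assumptions (not from the paper):
    # neurons are assigned to processors uniformly at random, and every
    # neuron has the same average fan-out.

    def message_volume(num_neurons: int, avg_fanout: float, num_processors: int) -> float:
        """Expected number of cross-processor messages per update cycle.

        Under a uniform random mapping, a connection's two endpoints land
        on different processors with probability (P - 1) / P, so the
        expected cross-processor traffic is N * F * (P - 1) / P.
        """
        p = num_processors
        total_connections = num_neurons * avg_fanout
        return total_connections * (p - 1) / p

    if __name__ == "__main__":
        # Example: 100,000 neurons with an average fan-out of 100,
        # partitioned over increasingly many processors.
        for p in (4, 16, 64, 256):
            v = message_volume(100_000, 100, p)
            print(f"P={p:4d}: ~{v:,.0f} messages per cycle")

Note that under random mapping the cross-processor fraction (P - 1) / P approaches 1 as P grows, which is why the choice of mapping policy, and not just raw bandwidth, dominates the cost/performance trade-off the abstract describes.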

