Abstract

In this paper, we first describe a model for mapping the backpropagation learning algorithm for artificial neural nets onto a massively parallel computer architecture with a 2D-grid communication network. We then show how this model can be sped up by hypercube inter-processor connections that provide logarithmic-time segmented parallel prefix operations. The approach can serve as a general model for implementing layered neural-net algorithms on any massively parallel computer with a 2D-grid or hypercube communication network. We have implemented this model on the Connection Machine CM-2, a general-purpose, massively parallel computer with a hypercube topology. Initial tests show that this implementation achieves about 180 million interconnections per second (IPS) for feed-forward computation and 40 million weight updates per second (WUPS) for learning. We use our model to evaluate the implementation, identifying which machine-specific features have helped improve performance and where further improvements can be made.
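
The abstract gives no implementation details, so the following sketch is purely illustrative and is not the authors' CM-2 code. It shows, in Python, how a segmented parallel prefix (scan) operation of the kind mentioned above can sum each neuron's weighted inputs in logarithmic time: every connection's weight-times-activation product sits in a flat array, segment flags mark where one neuron's fan-in begins, and each sweep corresponds to communication along one hypercube dimension (simulated here with a sequential loop). The function name segmented_prefix_sum and the example values are hypothetical.

# Illustrative sketch only: segmented parallel prefix sum over a flat array
# of weight*activation products, one segment per neuron's incoming connections.
# On a hypercube machine each sweep would map to one cube dimension; here the
# sweeps are simulated sequentially.

def segmented_prefix_sum(values, seg_start):
    """Inclusive segmented prefix sum in O(log n) parallel sweeps.

    values    : list of floats (e.g. w_ij * a_j products, one per connection)
    seg_start : list of bools, True where a new neuron's segment begins
    """
    n = len(values)
    sums = list(values)
    flags = list(seg_start)
    step = 1
    while step < n:
        new_sums = list(sums)
        new_flags = list(flags)
        for i in range(step, n):          # all i would update in parallel
            if not flags[i]:              # do not carry sums across a segment boundary
                new_sums[i] = sums[i] + sums[i - step]
                new_flags[i] = flags[i] or flags[i - step]
        sums, flags = new_sums, new_flags
        step *= 2
    return sums


if __name__ == "__main__":
    # Two neurons: one with 3 inputs, one with 2 inputs (hypothetical values).
    products = [0.5, 1.0, 0.25, 2.0, 0.75]
    starts   = [True, False, False, True, False]
    prefix = segmented_prefix_sum(products, starts)
    # The last element of each segment is that neuron's net input,
    # ready to be passed through the activation function.
    print(prefix)  # [0.5, 1.5, 1.75, 2.0, 2.75]

Because each sweep doubles the stride, n connection products are reduced in about log2(n) parallel steps, which is the logarithmic-time behavior that the hypercube connections are described as providing.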
