Abstract

We present a parallel algorithm that trains artificial neural networks on the TUTNC (Tampere University of Technology NeuroComputer), a tree-shaped parallel neurocomputer. The neurocomputer is designed for parallel computation and supports several artificial neural network models. The parallel back-propagation algorithm is described in detailed mathematical notation. Performance analyses show that the presented parallel implementation makes effective use of both broadcasting and global adding between processing units. With the given algorithm, processing units can be added to the system without any change in the number of communication transactions required.
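The two communication primitives mentioned in the abstract can be illustrated with a small sketch (not the paper's actual code): weight rows of one layer are partitioned across hypothetical processing units (PUs); a broadcast sends the same activation vector to every PU, and a global add sums per-PU partial vectors through the tree. All names, sizes, and the layer mapping here are illustrative assumptions.

```python
# Sketch only: one layer's forward/backward step with weight rows
# partitioned across P hypothetical processing units (PUs).
P = 2  # number of processing units (illustrative)

def broadcast(vec, num_pus):
    """Root sends the same vector down the tree to every PU."""
    return [list(vec) for _ in range(num_pus)]

def global_add(partials):
    """A tree of adders sums the per-PU partial vectors into one result."""
    return [sum(vals) for vals in zip(*partials)]

# Layer: 4 inputs -> 4 outputs; each PU owns 2 output neurons (weight rows).
W = [[0.1, 0.2, 0.0, 0.1],
     [0.0, 0.3, 0.1, 0.2],
     [0.2, 0.1, 0.3, 0.0],
     [0.1, 0.0, 0.2, 0.3]]
rows_per_pu = len(W) // P
owned = [W[p * rows_per_pu:(p + 1) * rows_per_pu] for p in range(P)]

# Forward pass: broadcast x; each PU computes its own neurons' activations.
x = [1.0, 0.5, -0.5, 2.0]
x_at_pu = broadcast(x, P)
y = []
for p in range(P):
    for w_row in owned[p]:
        y.append(sum(wi * xi for wi, xi in zip(w_row, x_at_pu[p])))

# Backward pass: each PU forms its partial of W^T * delta from its rows;
# global adding combines the partials, so the number of communication
# transactions stays the same as PUs are added.
delta = [0.1, -0.2, 0.05, 0.3]
d_at_pu = [delta[p * rows_per_pu:(p + 1) * rows_per_pu] for p in range(P)]
partials = []
for p in range(P):
    part = [0.0] * len(x)
    for w_row, d in zip(owned[p], d_at_pu[p]):
        for j in range(len(x)):
            part[j] += w_row[j] * d
    partials.append(part)
back_err = global_add(partials)  # equals W^T @ delta
```

The sketch captures only the communication pattern: one broadcast per forward step and one global add per backward step, independent of the number of PUs.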
