Abstract
One way to implement a parallel back-propagation algorithm is to distribute the examples to be learned among different processors. This method yields spectacular speedups for each epoch of back-propagation learning, but it has a major drawback: parallelization alters the gradient descent algorithm. This paper presents an implementation of this parallel algorithm on a transputer network. It reports experimental laws on back-propagation convergence speed, and shows that conditions still exist under which such a parallel algorithm achieves an actual speedup. It identifies theoretically and experimentally optimal conditions, in terms of the number of processors and the size of the example packets.
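A minimal sketch may help illustrate the alteration the abstract refers to. The following code (my own illustration, not taken from the paper; the model, learning rate, and data are hypothetical) contrasts a synchronous, example-parallel step, where each simulated processor computes a gradient over its packet of examples and the gradients are averaged before one shared weight update, with strict sequential per-example backpropagation:

```python
def grad(w, x, y):
    # Gradient of the squared error 0.5*(w*x - y)**2 for a 1-D linear model.
    return (w * x - y) * x

def packet_parallel_step(w, packets, lr=0.1):
    # One synchronous step: each "processor" holds one packet of examples;
    # the per-example gradients are averaged, then a single update is applied.
    g = sum(grad(w, x, y) for packet in packets for (x, y) in packet)
    g /= sum(len(p) for p in packets)
    return w - lr * g

def sequential_step(w, examples, lr=0.1):
    # Strict online backpropagation: update the weight after every example.
    for x, y in examples:
        w -= lr * grad(w, x, y)
    return w

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
packets = [examples[:2], examples[2:]]   # two simulated processors

w_par = packet_parallel_step(1.0, packets)
w_seq = sequential_step(1.0, examples)
# w_par and w_seq differ: distributing examples into packets changes the
# update trajectory, which is the alteration of gradient descent at issue.
```

The two resulting weights differ even on this tiny problem, which is why the packet size and the number of processors trade epoch speedup against convergence speed.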