Abstract

The computational cost of training artificial neural network (ANN) algorithms limits the use of large networks capable of processing complex problems. Implementing ANNs on a parallel or distributed platform to improve performance is therefore desirable. This work illustrates a method to predict and evaluate the performance of distributed ANN algorithms by analyzing the performance of the comparatively simple mathematical operations from which the ANNs are constructed. The ANN algorithms are decomposed into basic components: matrix-vector multiplication, element-wise application of a function to a matrix, and competition (winner selection) within a matrix. These basic operations are examined individually, and it is demonstrated that the computation performed by distributed neural networks can be derived from their composition. Three popular network architectures are examined: multi-layer perceptrons with back-propagation learning, self-organizing maps, and radial basis function networks.
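To make the decomposition concrete, the sketch below (not taken from the paper; the function names, the NumPy formulation, and the tanh activation are illustrative assumptions) expresses the three basic operations and composes them into a single feed-forward layer followed by a winner-selection step, as a self-organizing map would require.

```python
# Minimal illustrative sketch: the three basic operations the abstract
# names, and one possible composition into network computations.
import numpy as np

def mat_vec(W, x):
    """Matrix-vector multiplication: the dominant cost of an ANN layer."""
    return W @ x

def apply_fn(A, fn):
    """Element-wise application of a function to a matrix or vector."""
    return fn(A)

def compete(A):
    """Competition within a matrix: index of the winning element
    (here the largest value; a SOM would compete on smallest distance)."""
    return np.argmax(A)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # weights of one layer
x = rng.standard_normal(3)        # input vector

# One MLP layer = mat_vec followed by apply_fn ...
h = apply_fn(mat_vec(W, x), np.tanh)

# ... and a competitive network adds a winner-selection step.
winner = compete(h)
print(h, winner)
```

Under this view, the cost of a distributed forward pass can be estimated by timing each primitive separately on the target platform and summing the costs in the order the composition dictates.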
