Abstract

Executing a sequential implementation of a Neural Network (NN) generally induces a high computation-time cost, a large part of which is spent on the learning phase. This cost, together with the weak computational performance of a computer compared with a human brain [3], makes it difficult to test complex, large connectionist models, such as models inspired by biological reality, models with a high-dimensional input space, or models with a large number of units. Moreover, NNs exhibit a large amount of natural parallelism. Unfortunately, NN parallelism is very different from the parallelism of modern general-purpose parallel computers: NNs have a fine grain of parallelism and a natural message-passing paradigm, whereas modern parallel computers have a MIMD (Multiple Instruction, Multiple Data) architecture. Recent hardware developments have made shared memory an efficient parallel programming model even with a large number of processors. The main goals of this project are to speed up NN executions and to decrease the development time of parallel NN implementations, in order to quickly implement various kinds of NN and to run more complex simulations on parallel computers than sequential computers allow. We offer connectionists a tool to develop their models with fine-grain parallelism and to execute them on DSM (Distributed Shared Memory) MIMD general-purpose parallel computers.
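The abstract's notion of fine-grain, neuron-level parallelism over shared memory can be illustrated by a minimal sketch, not the project's actual tool: each neuron's activation is computed as an independent task over shared weights and inputs, mapped onto a thread pool. All function names and numeric values here are hypothetical illustrations.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def neuron_output(weights, inputs, bias):
    """One fine-grain unit of work: weighted sum plus sigmoid activation."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def forward_layer(weight_matrix, biases, inputs, workers=4):
    """Evaluate one layer with one task per neuron over shared-memory data.

    The weight matrix and input vector are shared by all tasks; only the
    scheduling of neuron computations is parallel, mimicking how fine-grain
    NN parallelism can be mapped onto a shared-memory MIMD machine.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(neuron_output, row, inputs, b)
                   for row, b in zip(weight_matrix, biases)]
        return [f.result() for f in futures]

if __name__ == "__main__":
    W = [[0.5, -0.2], [0.1, 0.4]]   # 2 neurons, 2 inputs (made-up weights)
    b = [0.0, 0.1]
    x = [1.0, 2.0]
    print(forward_layer(W, b, x))
```

On real DSM hardware the grain per task would normally be coarsened (e.g. a block of neurons per processor) to amortize scheduling overhead, but the neuron-per-task decomposition above matches the "fine grain" view described in the abstract.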
