Abstract

Implementations of neural networks on programmable massively parallel computers are addressed. The methods are based on a graph-theoretic approach and are applicable to a large class of networks in which the computations can be described by means of matrix and vector operations. A detailed characterization of the target machine is provided. Two mappings are presented. The first is designed for a processor array consisting of a very large number of small processing units. The neurons and the nonzero synaptic weights are assigned to the processors in a predetermined order, one per processor. The data transfers between processors holding neurons and those holding weights are implemented using a novel routing algorithm. The second mapping is designed for a data array of size N*N and a smaller processor array of size P*P, P >
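As a rough illustration of the second mapping described above (not the paper's own code), the sketch below partitions an N*N weight matrix into blocks assigned to a P*P processor grid and accumulates the partial matrix-vector products. The function name block_matvec and the assumption that P divides N are illustrative only.

import numpy as np

def block_matvec(W, x, P):
    """Illustrative block mapping of an N x N weight matrix onto a P x P
    processor grid: processor (i, j) holds the (N/P) x (N/P) block of W
    and computes a partial product on its slice of x; the partial results
    are summed along grid rows. A sketch, not the paper's algorithm."""
    N = W.shape[0]
    assert N % P == 0, "sketch assumes P divides N"
    b = N // P
    y = np.zeros(N)
    for i in range(P):          # grid row of the processor
        for j in range(P):      # grid column of the processor
            # Partial product computed locally by processor (i, j).
            y[i*b:(i+1)*b] += W[i*b:(i+1)*b, j*b:(j+1)*b] @ x[j*b:(j+1)*b]
    return y

# Usage: the block-partitioned result matches a direct matrix-vector product.
N, P = 8, 2
W = np.random.rand(N, N)
x = np.random.rand(N)
assert np.allclose(block_matvec(W, x, P), W @ x)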


