Abstract

Background: Neural networks are relatively crude computational models based on the neural system of the human brain, in which millions of neurons form a massively parallel information-processing system. Complex problems may require sophisticated processing techniques to achieve practical speed. Method: A neural network is a parallel and distributed process, and parallel execution can be carried up to the level of the training process. Parallelization approaches include hardware implementations, a software package (SPANN), special-purpose hardware, and multicore CPUs through MPI. Findings: To achieve parallelism and speed up the training process, each neuron and the full neural network are duplicated across multiple threads. The number of cores in modern microprocessors is rapidly increasing, so high-performance computing is a great challenge for application developers. Improvements: Our future work aims to distribute neurons across multiple threads for parallel execution and to duplicate the full neural network for parallel training.
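The duplication of the full network across threads described above can be sketched as a simple data-parallel scheme. This is an illustrative assumption, not the paper's exact method: a single perceptron stands in for the network, each thread trains its own replica on a shard of the data, and the replica weights are averaged afterwards.

```python
# Sketch of replica-based parallel training (assumed data-parallel scheme):
# the full network -- here a single perceptron, a stand-in for simplicity --
# is duplicated across several threads, each trains on its own data shard,
# and the resulting weights are averaged into one final network.
import threading

def train_replica(weights, shard, lr, results, idx):
    """Train one private copy of the network on one data shard."""
    w = list(weights)  # each thread works on its own copy of the weights
    for x, y in shard:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        err = y - pred
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    results[idx] = w

# Toy AND-gate training set (first input is a constant bias term),
# repeated so every contiguous shard still covers all four patterns.
data = [((1, 1, 1), 1), ((1, 1, 0), 0), ((1, 0, 1), 0), ((1, 0, 0), 0)] * 50

n_threads = 4
chunk = len(data) // n_threads
shards = [data[i * chunk:(i + 1) * chunk] for i in range(n_threads)]

w0 = [0.0, 0.0, 0.0]
results = [None] * n_threads
threads = [threading.Thread(target=train_replica,
                            args=(w0, shards[i], 0.1, results, i))
           for i in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Combine the replicas by averaging their weights (one common choice;
# other merge strategies are equally possible).
final_w = [sum(ws) / n_threads for ws in zip(*results)]
```

Note that in CPython the global interpreter lock prevents true parallel execution of this pure-Python loop; the sketch shows the structure of the approach, while a real speedup would need processes, MPI ranks, or native threads as the abstract suggests.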

