Abstract

Recently, General-Purpose computing on Graphics Processing Units (GPGPU), which uses the GPU not only for graphics processing but also for general-purpose computation, has attracted attention because the GPU, developed for 3DCG and video processing, offers higher performance than the CPU. Because the GPU contains circuits dedicated to drawing graphics, it is characterized by a large number of simple arithmetic units, which makes it promising not only for graphics processing but also for massively parallel computation. In this research, we apply this technology to neural network learning, a form of intelligent signal processing. In previous work, we proposed three methods for speeding up neural network learning; one of them, parallelization of pattern processing, still has room for improvement. In this paper, we report a method in which the weight coefficients of the neurons are updated simultaneously by changing the order of the pattern calculations. The proposed calculation method is evaluated on test data sets, and the results confirm that it converges similarly to the conventional method. We also propose an optimal implementation of the method on the GPU, which is found to be three to six times faster than the conventional method.
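To illustrate the idea of "changing the order of the pattern calculations," the following is a minimal CUDA sketch assuming a standard back-propagation formulation for one fully connected layer. Instead of looping over weights for each training pattern, each GPU thread owns one weight, accumulates that weight's gradient contribution over all patterns in the batch, and then applies a single update, so that every weight coefficient is updated simultaneously. The kernel name, variable names, and batching scheme are illustrative assumptions and are not taken from the paper.

```cuda
// Hypothetical sketch, not the authors' implementation: batch-parallel
// weight update for one fully connected layer.
#include <cuda_runtime.h>

__global__ void update_weights_batch(float       *weights,     // [n_out * n_in]
                                     const float *activations, // [n_patterns * n_in]
                                     const float *deltas,      // [n_patterns * n_out]
                                     int n_in, int n_out, int n_patterns,
                                     float learning_rate)
{
    // One thread per weight w[j][i]; flat index over the weight matrix.
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n_in * n_out) return;

    int j = idx / n_in;  // output neuron index
    int i = idx % n_in;  // input neuron index

    // The pattern loop is moved inside each weight-thread: the gradient
    // contributions of all patterns are accumulated before updating, so
    // all weights are processed in parallel rather than pattern by pattern.
    float grad = 0.0f;
    for (int p = 0; p < n_patterns; ++p)
        grad += deltas[p * n_out + j] * activations[p * n_in + i];

    weights[idx] -= learning_rate * grad;
}
```

A host-side caller would launch this kernel with roughly (n_in * n_out + 255) / 256 blocks of 256 threads after copying the batch's activations and back-propagated deltas to device memory. The design choice reflected here is that the outer loop over training patterns in conventional pattern-by-pattern learning becomes the innermost loop, removing the serial dependency between weight updates.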
