Abstract
The Graphics Processing Unit (GPU) has been used to accelerate graphics calculations as well as to build more general-purpose computing devices. One of the most widely used parallel platforms is the Compute Unified Device Architecture (CUDA), which allows implementations to run in parallel on the GPU, obtaining high computational performance. Over the last years, CUDA has been used to implement several parallel distributed systems. At the end of the 1980s, a stochastic neural network named the Random Neural Network (RNN) was introduced. The model has been used successfully in the Machine Learning community to solve many learning tasks. In this paper we present a CUDA implementation of the gradient descent algorithm for the RNN model. We evaluate the performance of the algorithm on two real benchmark problems about energy sources, and we compare it with the performance obtained by a classic implementation in C.

Keywords: Random Neural Network, Parallel Computing, CUDA, Gradient Descent Algorithm