Abstract

Graphics processing units (GPUs) are now widely used to accelerate general-purpose workloads through programming models such as the Open Computing Language (OpenCL) and the Compute Unified Device Architecture (CUDA). In this paper, we accelerate the artificial neural network (ANN) algorithm, one of the most popular algorithms in machine learning and cognitive science, since ANNs must run faster to solve more complex problems or to operate in real time. The ANN algorithm has great potential for GPU acceleration because it consists largely of data-parallel computations. We implemented the forward computation of an ANN in CUDA and optimized it by using the scratchpad (shared) memory of the GPU and by tuning the thread block size. As a result, our method runs 2.32 times faster than a conventional CPU implementation.
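The abstract does not show the kernels themselves; the sketch below illustrates one plausible shape of the optimization it describes, a fully connected layer's forward pass in CUDA where the input vector is staged through shared (scratchpad) memory. The kernel name `fcForward`, the tile/block size `TILE`, and the sigmoid activation are assumptions for illustration, not details taken from the paper.

```cuda
#include <cstdio>
#include <cmath>

#define TILE 128  // thread block size; the paper tunes this, the value here is illustrative

// Forward pass of one fully connected layer: y = sigmoid(W * x + b).
// Each thread computes one output neuron; the input vector is staged
// through shared (scratchpad) memory tile by tile, so each element of x
// is read from global memory only once per thread block.
__global__ void fcForward(const float* W, const float* x, const float* b,
                          float* y, int nIn, int nOut) {
    __shared__ float xTile[TILE];
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < nIn; t += TILE) {
        // Cooperatively load one tile of the input into shared memory.
        if (t + threadIdx.x < nIn)
            xTile[threadIdx.x] = x[t + threadIdx.x];
        __syncthreads();

        int tileLen = min(TILE, nIn - t);
        if (row < nOut)
            for (int k = 0; k < tileLen; ++k)
                acc += W[row * nIn + t + k] * xTile[k];
        __syncthreads();  // keep the tile valid until all threads finish with it
    }

    if (row < nOut)
        y[row] = 1.0f / (1.0f + expf(-(acc + b[row])));  // sigmoid activation
}

int main() {
    // Tiny illustrative layer: 4 inputs, 2 output neurons.
    const int nIn = 4, nOut = 2;
    float hW[nOut * nIn] = {0.1f, 0.2f, 0.3f, 0.4f,
                            0.5f, 0.6f, 0.7f, 0.8f};
    float hx[nIn] = {1.0f, 2.0f, 3.0f, 4.0f};
    float hb[nOut] = {0.0f, 0.0f};
    float hy[nOut];

    float *dW, *dx, *db, *dy;
    cudaMalloc(&dW, sizeof(hW));
    cudaMalloc(&dx, sizeof(hx));
    cudaMalloc(&db, sizeof(hb));
    cudaMalloc(&dy, sizeof(hy));
    cudaMemcpy(dW, hW, sizeof(hW), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, hx, sizeof(hx), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, sizeof(hb), cudaMemcpyHostToDevice);

    int blocks = (nOut + TILE - 1) / TILE;
    fcForward<<<blocks, TILE>>>(dW, dx, db, dy, nIn, nOut);
    cudaMemcpy(hy, dy, sizeof(hy), cudaMemcpyDeviceToHost);

    for (int i = 0; i < nOut; ++i)
        printf("y[%d] = %f\n", i, hy[i]);

    cudaFree(dW); cudaFree(dx); cudaFree(db); cudaFree(dy);
    return 0;
}
```

The shared-memory staging matters because every output neuron reads the entire input vector; without it, each thread would issue its own redundant global-memory loads, and leveraging the thread block size (here `TILE`) trades occupancy against the amount of input reused per block.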
