Abstract

Modern deep learning architectures are computationally expensive to train due to their size, their complexity, and the datasets they are typically trained on. There are several approaches to accelerating the training process. Using graphics processing units (GPUs) is an increasingly effective and accessible way for researchers to train large neural networks in a fraction of the time. In this paper, we trained neural networks on the High-Performance Computing Cluster (HPCC) Systems Platform using GPU acceleration, with a novel implementation that combines widely used neural network libraries with the HPCC Systems Platform. This is the first work to use GPU acceleration on HPCC Systems and the first to demonstrate the effectiveness of using single and multiple GPUs with HPCC Systems. We provide experiments that measure the improvement in training time compared to using only central processing units (CPUs). The experiments trained a convolutional neural network on image data and a multilayer perceptron on a Medicare fraud detection dataset, using both single and multiple GPUs to accelerate the training process. Our results show that although GPU usage does not always guarantee a significant performance increase, using one or more GPUs generally decreases the required training time.
