Abstract

GPGPUs have become a significant component of supercomputing architectures, and a growing number of HPC application developers now rely on them. CUDA has remained a successful computing platform for these architectures: thousands of scientific applications from domains such as bioinformatics and high-energy physics (HEP) have been accelerated using CUDA in recent years. However, energy consumption remains a serious challenge for the HPC and GPGPU communities. This paper proposes energy prediction approaches based on dynamic regression models: parallel dynamic random forest modeling (P-DynRFM), dynamic random forest modeling (DynRFM), dynamic support vector machines (DynSVM), and dynamic linear regression modeling (DynLRM). These models identify energy-efficient CUDA application instances while considering the block size, grid size, and other tunable parameters such as problem size. Predictions are obtained by executing a few CUDA application instances and predicting the remaining instances from application performance metrics, such as instruction counts and memory-related counters. The proposed energy prediction mechanisms were evaluated with CUDA applications such as Nbody and Particle Simulations on two GPGPU machines. The proposed dynamic prediction mechanisms achieved energy/performance prediction improvements of 50.26 to 61.23 percent compared to the classical prediction models, and the parallel implementation of the dynamic RFM (P-DynRFM) recorded prediction time improvements of over 83 percent.
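To illustrate the general idea, the following is a minimal sketch of such a prediction step, assuming a random forest regressor (scikit-learn's RandomForestRegressor) trained on metrics collected from a few executed application instances. The feature names, values, and energy figures are hypothetical placeholders, not the paper's data or the DynRFM/P-DynRFM implementation.

```python
# Hypothetical sketch: predict energy of unseen CUDA application instances
# (varying block size, grid size, problem size) from performance metrics of
# a few executed instances, using a random forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [block_size, grid_size, problem_size, instruction_count, memory_transactions]
# collected for an executed application instance (placeholder values).
executed_features = np.array([
    [128,  64, 1_000_000, 2.1e9, 3.4e7],
    [256,  32, 1_000_000, 2.0e9, 3.1e7],
    [512,  16, 2_000_000, 4.2e9, 6.9e7],
    [256, 128, 4_000_000, 8.5e9, 1.3e8],
])
# Measured energy in joules for the executed instances (placeholder values).
measured_energy = np.array([41.2, 38.7, 77.5, 150.3])

# Fit a regression model on the executed instances.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(executed_features, measured_energy)

# Predict energy for candidate instances that were not executed,
# e.g. alternative block/grid configurations of the same problem.
candidate_features = np.array([
    [128, 256, 4_000_000, 8.6e9, 1.2e8],
    [512,  64, 4_000_000, 8.4e9, 1.4e8],
])
predicted_energy = model.predict(candidate_features)

# Select the configuration with the lowest predicted energy.
best = candidate_features[np.argmin(predicted_energy)]
print("Predicted energies:", predicted_energy)
print("Most energy-efficient candidate configuration:", best)
```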
