Abstract

Massive data parallelism can be achieved on general-purpose graphics processing units (GPGPUs) through the OpenCL framework. However, when small workloads run on GPUs with large memory, the result is a low resource-utilization ratio and energy inefficiency. To date, no model exists for sharing the GPU among further executions. Moreover, if a kernel pair contends for the same computational resource, merging the kernels can significantly increase execution time. Optimal device selection combined with careful kernel merging can therefore substantially speed up the execution of a batch of jobs. This paper proposes a kernel-merging method that achieves high GPU occupancy, thereby reducing execution time and increasing GPU utilization. In addition, a machine learning (ML)-based GPU-sharing mechanism is presented that selects pairs of kernels in the OpenCL framework. The model first selects a suitable architecture for each job and then merges GPU kernels for better resource utilization; from all candidate kernels, the optimal pair is selected with respect to data size. Experimental results show that the proposed model achieves an F1-measure of 0.91 for device selection and 0.98 for the kernel-merging scheduling scheme.
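The pairing idea behind kernel merging can be illustrated with a minimal sketch: kernels that contend for the same GPU resource make poor merge candidates, so a scheduler prefers pairs with complementary demands (e.g. a compute-bound kernel with a memory-bound one). The kernel names, resource-demand numbers, and scoring heuristic below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of contention-aware kernel pairing for merged launch.
# Each kernel is characterized by rough resource-demand fractions; a pair
# whose combined demand per resource stays near 1.0 wastes the least capacity.
from itertools import combinations

kernels = {
    "matmul":    {"compute": 0.9, "bandwidth": 0.2},  # compute-bound
    "stencil":   {"compute": 0.3, "bandwidth": 0.8},  # memory-bound
    "reduction": {"compute": 0.2, "bandwidth": 0.9},  # memory-bound
    "nbody":     {"compute": 0.8, "bandwidth": 0.3},  # compute-bound
}

def pair_score(a, b):
    """Lower is better: penalize pairs that over- or under-fill a resource."""
    return sum((kernels[a][r] + kernels[b][r] - 1.0) ** 2
               for r in ("compute", "bandwidth"))

def best_pairs(names):
    """Greedily pick the lowest-contention kernel pairs for merged execution."""
    remaining = set(names)
    pairs = []
    while len(remaining) >= 2:
        a, b = min(combinations(sorted(remaining), 2),
                   key=lambda p: pair_score(*p))
        pairs.append((a, b))
        remaining -= {a, b}
    return pairs

print(best_pairs(kernels))
# → [('matmul', 'reduction'), ('nbody', 'stencil')]
```

In this toy instance the heuristic pairs each compute-bound kernel with a memory-bound one, which is the occupancy-raising behavior the abstract attributes to merging; the paper's actual mechanism uses an ML model rather than this hand-written score.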
