Abstract

Currently, GPGPU-Sim has become an important vehicle for academic architecture research. It is a cycle-accurate simulator that models contemporary graphics processing units. Machine learning is now widely used in applications such as self-driving cars, mobile devices, and medicine. With the popularity of mobile devices, vendors are interested in porting machine learning and deep learning applications from computers to mobile devices; Google, for example, has developed TensorFlow Lite and the Android NNAPI for mobile and embedded devices. Since machine learning and deep learning are computationally intensive, energy consumption has become a serious problem on mobile devices. Moreover, Moore's law cannot last forever, so the performance of mobile devices and of computers such as desktops and servers will see only limited improvements in the foreseeable future. Performance and energy consumption are therefore two issues of great concern. In this paper, we propose using fixed-point, a low-power numerical data type, to reduce energy consumption and improve performance in machine learning applications. We implemented fixed-point instructions in the GPGPU-Sim simulator and measured the resulting energy consumption and performance. Our evaluation demonstrates that the proposed design, using the fixed-point instructions, yields improved energy savings: our experiments indicate that the fixed-point data type saves at least 14% of total GPU energy consumption compared with the floating-point data type.
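To illustrate the fixed-point data type the abstract refers to, the following is a minimal sketch of fixed-point (Q-format) arithmetic. The Q8.8 format, the saturation behavior, and the function names here are assumptions for illustration only, not the paper's actual instruction design.

```python
# Illustrative fixed-point (Q8.8) arithmetic: values are stored as 16-bit
# signed integers with 8 fractional bits. This format is an assumption for
# the sketch, not the configuration used in the paper.

FRAC_BITS = 8           # fractional bits in the assumed Q8.8 format
SCALE = 1 << FRAC_BITS  # 256

def float_to_fixed(x: float) -> int:
    """Quantize a float to a 16-bit signed fixed-point value."""
    v = int(round(x * SCALE))
    # Saturate to the representable 16-bit range.
    return max(-(1 << 15), min((1 << 15) - 1, v))

def fixed_to_float(v: int) -> float:
    """Recover an approximate float from the fixed-point value."""
    return v / SCALE

def fixed_mul(a: int, b: int) -> int:
    """Fixed-point multiply: the raw product carries 2*FRAC_BITS
    fractional bits, so shift right to renormalize."""
    return (a * b) >> FRAC_BITS
```

Because operations like `fixed_mul` reduce to integer multiplies and shifts, they avoid the wider datapaths and normalization logic of floating-point units, which is the source of the energy savings the paper targets.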
