Abstract

As the platforms running deep neural networks (DNNs) move from large-scale data centers to handheld devices, power emerges as one of the most significant obstacles. Voltage scaling is a promising technique for saving power. Nevertheless, it raises reliability and performance concerns that may undesirably degrade NN accuracy and performance. Consequently, an energy-efficient and reliable scheme is required for NNs that balances these three aspects while maintaining a satisfactory user experience. To this end, we propose a neuron-level voltage scaling framework called NN-APP that models the impact of supply voltage on NNs from the output accuracy (A), power (P), and performance (P) perspectives. We analyze the error propagation characteristics in NNs at both the inter- and intra-layer levels to precisely model the impact of voltage scaling on the final output accuracy at the neuron level. Furthermore, we combine a voltage clustering method with multi-objective optimization to identify optimal voltage islands and apply the same voltage to neurons with similar fault-tolerance capability. We perform three case studies to demonstrate the efficacy of the proposed techniques.
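The abstract does not give implementation details of NN-APP, so the following is only a minimal illustrative sketch of the voltage-island idea: neurons with similar fault-tolerance (sensitivity) scores are clustered together and each cluster is assigned one shared supply voltage. The function name form_voltage_islands, the parameters num_islands, v_min, and v_max, and the use of k-means are all assumptions for demonstration; they are not the paper's actual algorithm or voltage values.

Example (Python):

    # Illustrative sketch only (not the NN-APP implementation): cluster neurons
    # by an assumed per-neuron sensitivity score (lower = more fault tolerant)
    # and map each cluster to one supply voltage. All names and numbers here
    # are hypothetical.
    import numpy as np
    from sklearn.cluster import KMeans

    def form_voltage_islands(sensitivity, num_islands=4, v_min=0.55, v_max=0.9):
        """Group neurons into voltage islands and assign one voltage per island."""
        sensitivity = np.asarray(sensitivity, dtype=float).reshape(-1, 1)
        labels = KMeans(n_clusters=num_islands, n_init=10,
                        random_state=0).fit_predict(sensitivity)

        # Rank islands by mean sensitivity: the least sensitive (most fault
        # tolerant) island receives the lowest, most aggressive voltage.
        island_order = np.argsort([sensitivity[labels == c].mean()
                                   for c in range(num_islands)])
        voltages = np.linspace(v_min, v_max, num_islands)
        island_voltage = {int(c): float(voltages[rank])
                          for rank, c in enumerate(island_order)}

        # Per-neuron voltage assignment plus the island-to-voltage map.
        return np.array([island_voltage[int(c)] for c in labels]), island_voltage

    # Usage with synthetic sensitivity scores for 1000 neurons.
    rng = np.random.default_rng(0)
    neuron_voltage, islands = form_voltage_islands(rng.random(1000))
    print(islands)

In practice the sensitivity scores would come from the error-propagation analysis described above, and the voltage assignment would be chosen by the multi-objective optimization over accuracy, power, and performance rather than a fixed linear spacing.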
