Abstract
This paper introduces an energy-efficient design method for Deep Neural Network (DNN) accelerators. Although GPUs are widely used for DNN acceleration, their large power consumption limits practical use on mobile devices. Recent DNN accelerators therefore target high energy efficiency to achieve real-time DNN acceleration at low power, but a hardware-oriented algorithm is essential for a realistic implementation. Accordingly, various network-compression techniques are applied alongside DNN accelerators, using several schemes that reduce computational complexity at the cost of some accuracy. This work studies these optimization schemes and presents a DNN accelerator architecture obtained through hardware-software co-optimization.
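As a concrete illustration of the kind of network compression the abstract refers to, the sketch below shows uniform symmetric post-training weight quantization, one common scheme that trades a bounded accuracy loss for much smaller weight storage. This is only an illustrative example, not the specific scheme used in the paper; the bit-width and weight values are assumptions.

```python
def quantize(weights, bits=8):
    """Uniform symmetric quantization of float weights to signed integers.

    Returns the integer codes and the scale needed to dequantize.
    """
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax     # map largest |weight| to qmax
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from integer codes."""
    return [c * scale for c in codes]

# Illustrative layer weights: 8-bit codes need 4x less storage than float32,
# and the reconstruction error is bounded by half a quantization step.
weights = [0.8, -1.27, 0.05, 0.33]
codes, scale = quantize(weights, bits=8)
restored = dequantize(codes, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Hardware-oriented accelerator designs exploit exactly this kind of transformation: integer multiply-accumulate units are far cheaper in area and energy than floating-point ones, which is why compression and accelerator architecture are co-optimized.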