Abstract

A large fraction of the activation values in deep neural networks (DNNs) are zeros because of ReLU (Rectified Linear Unit), one of the most common activation functions in modern neural networks. Since ReLU outputs zero for every negative input, the inputs to ReLU need not be computed exactly as long as they are known to be negative. Most accelerators, however, do not exploit this property and thus miss substantial opportunities for speedup and energy savings. To exploit these opportunities, we propose early negative detection (END), a computation pruning technique that detects negative results at an early stage. The key to early negative detection is the adoption of an inverted two's complement representation for the filter parameters, which guarantees that once an intermediate result becomes negative, the final result is also negative. Upon detection, the remaining computation can be skipped and the subsequent ReLU output can simply be set to zero. We also propose a DNN accelerator architecture, ComPreEND, that takes advantage of this skipping. Our evaluation shows that ComPreEND with END significantly improves both energy efficiency and performance. Compared to the baseline, we obtain speedups of 20.5 and 29.3 percent in accurate mode and predictive mode, respectively, and energy savings of 28.4 and 41.4 percent.
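The control flow behind END can be illustrated with a minimal, purely conceptual sketch. It is not the paper's implementation: the ordering assumption below (all positive contributions accumulated before any negative ones, with non-negative activations) is a simplified stand-in for the inverted two's complement encoding of the filter parameters, which provides the same monotonicity guarantee at the bit level. The function and variable names are illustrative only.

```python
# Conceptual sketch of early negative detection (END) for one ReLU neuron.
# Assumptions (not from the paper): activations are non-negative (e.g. they
# are themselves ReLU outputs), and the accumulation is ordered so that all
# positive contributions are added before any negative ones. Under that
# ordering the partial sum can only decrease once the negative terms start,
# so a negative partial sum guarantees a negative final result. The paper
# obtains an equivalent guarantee at the bit level by encoding the filter
# parameters in inverted two's complement form; that encoding is not
# modeled here.

def relu_with_end(activations, weights):
    """Return ReLU(dot(activations, weights)), skipping work once the
    final result is known to be negative."""
    assert all(a >= 0 for a in activations)

    pos_terms = [a * w for a, w in zip(activations, weights) if w >= 0]
    neg_terms = [a * w for a, w in zip(activations, weights) if w < 0]

    partial = sum(pos_terms)  # every remaining term is <= 0
    for term in neg_terms:
        partial += term
        if partial < 0:
            # Early negative detection: the final pre-activation is
            # guaranteed to be negative, so ReLU outputs zero and the
            # remaining multiply-accumulate steps are skipped.
            return 0
    return partial  # non-negative, so ReLU is the identity
```

In the accelerator, this sign check is performed as the partial sum is accumulated, and every skipped multiply-accumulate step saves both cycles and energy, which is the source of the reported gains.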
