Abstract

More recently, it has become possible to run deep learning algorithms on edge devices such as microcontrollers, thanks to continuous improvements in neural network optimization techniques such as quantization and neural architecture search. Nonetheless, most of the embedded hardware available today still falls short of the requirements for running deep neural networks. As a result, specialized processors have emerged to improve the inference efficiency of deep learning algorithms. However, most are not designed for edge applications, which require efficient and low-cost hardware. Therefore, we design and prototype a low-cost configurable sparse Neural Processing Unit (NPU). The NPU has a built-in buffer and a reshapable mixed-precision multiply-accumulate (MAC) array. The computing and memory resources of the NPU are parameterized, so different NPU variants can be derived. In addition, users can configure the NPU at runtime to fully utilize its resources. In our experiments, a 200 MHz NPU with only 32 MACs is more than 32 times faster than a 400 MHz STM32H7 when inferring MobileNet-V1. Moreover, the derived NPUs can achieve roofline performance or even exceed it. The buffer and reshapable MAC array push the NPU's attainable performance to the roofline, while support for sparsity allows the NPU to obtain performance beyond the roofline.
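The "beyond roofline" claim can be read through the standard roofline model: dense attainable performance is bounded by the minimum of peak compute and memory bandwidth times arithmetic intensity, while skipping zero operands raises effective throughput above that dense bound. The sketch below illustrates this relationship; it is not taken from the paper, and all numeric parameters (bandwidth, arithmetic intensity, sparsity) are assumed for illustration only.

```python
# Illustrative roofline sketch (not the paper's model). Dense attainable
# performance = min(peak compute, bandwidth * arithmetic intensity).
# Skipping zero-valued operands raises *effective* throughput, which is
# how a sparse accelerator can appear to exceed the dense roofline.
# All numbers below are assumptions for illustration.

def roofline(peak_ops_per_s: float, bandwidth_bytes_per_s: float,
             arithmetic_intensity: float) -> float:
    """Dense attainable performance (ops/s) at a given arithmetic intensity (ops/byte)."""
    return min(peak_ops_per_s, bandwidth_bytes_per_s * arithmetic_intensity)

def effective_sparse(dense_attainable: float, zero_fraction: float) -> float:
    """Effective ops/s when the given fraction of multiply-accumulates is skipped."""
    return dense_attainable / (1.0 - zero_fraction)

# Example: 32 MACs at 200 MHz, 2 ops per MAC per cycle (multiply + add).
peak = 32 * 200e6 * 2        # 12.8 GOPS peak compute (from the NPU's MAC count and clock)
bw = 1.6e9                   # assumed memory bandwidth, bytes/s
ai = 16.0                    # assumed arithmetic intensity of a conv layer, ops/byte

dense = roofline(peak, bw, ai)
print(f"dense attainable: {dense / 1e9:.1f} GOPS")
print(f"effective with 50% zeros skipped: {effective_sparse(dense, 0.5) / 1e9:.1f} GOPS")
```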
