Abstract

Near-Data Processing (NDP) techniques are being introduced into deep learning accelerators because they greatly relieve the pressure on memory bandwidth. Approximate computing is likewise adopted in neural network acceleration, exploiting the fault tolerance of networks to reduce energy consumption. In this paper, an NDP accelerator with approximate-computing features is proposed for LSTM, exploiting data parallelism through reconfigurability. First, a hybrid-grained network partitioning model with an LSTM scheduling strategy is put forward to achieve high processing parallelism. Second, approximate computing units with adaptive precision are designed for LSTM. A heterogeneous architecture, RNA, combining reconfigurable computing arrays with approximate NDP units, is then proposed and implemented, driven by configuration codes. The gates and cells of the LSTM are modeled as fine-grained operations, organized into coarse-grained tasks, and mapped onto RNA. The approximate computing units are integrated into the NDP units, and their adaptive precision is likewise controlled by the configuration codes. The proposed RNA architecture achieves 544 GOPS/W energy efficiency when processing LSTM and can be extended to larger and more complex recurrent neural networks. Compared with the state-of-the-art LSTM accelerator, it is 2.14 times more energy efficient.
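For context, the gate and cell operations referred to above follow the standard LSTM cell, sketched below in its textbook formulation (the weight names $W_*$, $U_*$, $b_*$ are conventional notation, not taken from this paper). Each matrix-vector product is a candidate fine-grained operation, and the per-gate groups form natural coarse-grained tasks for mapping onto the reconfigurable arrays:

\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}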
