Abstract

Near-data processing (NDP) techniques are being introduced into deep learning accelerators because they can greatly relieve the pressure on memory bandwidth. Approximate computing is likewise adopted in neural network acceleration, exploiting the networks' fault tolerance to reduce energy consumption. In this paper, an NDP accelerator with approximate computing features is proposed for LSTM, exploiting data parallelism through reconfigurability. First, a hybrid-grained network partitioning model with an LSTM scheduling strategy is put forward to achieve high processing parallelism. Second, approximate computing units with adaptive precision are designed for LSTM. A heterogeneous architecture, RNA, combining reconfigurable computing arrays with approximate NDP units, is then proposed and implemented, driven by configuration codes. The gates and cells of LSTM are modeled as fine-grained operations, organized into coarse-grained tasks, and mapped onto RNA. The approximate computing units integrated into the NDP units operate at an adaptive precision that is likewise controlled by the configuration codes. The proposed RNA architecture achieves an energy efficiency of 544 GOPS/W when processing LSTM, and it can be extended to larger and more complex recurrent neural networks. Compared with the state-of-the-art LSTM accelerator, its energy efficiency is 2.14 times higher.
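To make the gate-level decomposition and adaptive precision concrete, the sketch below steps one standard LSTM cell while quantizing each gate's pre-activation to a per-gate bit width. This is only an illustrative model of the adaptive-precision idea, not the RNA hardware design: the `quantize` helper and the `gate_bits` mapping are hypothetical names introduced here for the example.

```python
# Minimal sketch of one LSTM step with per-gate adaptive-precision
# quantization (hypothetical illustration, not the authors' hardware).
import numpy as np

def quantize(x, bits):
    """Uniform fixed-point quantization to `bits` bits (hypothetical helper).

    Assumes inputs roughly in [-1, 1], as pre-activations would be after
    suitable scaling."""
    scale = 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), -scale, scale) / scale

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b, gate_bits):
    """One standard LSTM cell step; each gate uses its own precision.

    W: (4H, D), U: (4H, H), b: (4H,) hold the fused gate weights in
    i/f/o/g order; gate_bits maps gate name -> bit width."""
    H = h.shape[0]
    z = W @ x + U @ h + b                              # fused pre-activations
    i = sigmoid(quantize(z[0*H:1*H], gate_bits["i"]))  # input gate
    f = sigmoid(quantize(z[1*H:2*H], gate_bits["f"]))  # forget gate
    o = sigmoid(quantize(z[2*H:3*H], gate_bits["o"]))  # output gate
    g = np.tanh(quantize(z[3*H:4*H], gate_bits["g"]))  # candidate cell state
    c_new = f * c + i * g                              # cell update
    h_new = o * np.tanh(c_new)                         # hidden state
    return h_new, c_new
```

In this toy model, each of the four fine-grained gate operations could be dispatched as a separate task, and lowering a gate's entry in `gate_bits` trades accuracy for energy, analogous to how the paper's configuration codes select the precision of the approximate NDP units.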
