Abstract
DRAM devices require periodic refresh operations to preserve data integrity. Slowing the refresh rate reduces energy consumption but may cause data loss in DRAM cells. This paper proposes a new soft-approximation memory architecture for deep learning applications that reduces refresh energy consumption while maintaining accuracy and high performance. Exploiting the error-tolerant nature of deep learning applications, the proposed architecture avoids the accuracy drop caused by data loss by flexibly controlling the refresh operation for different bits according to their criticality. For data storage, the approximate DRAM architecture reorganizes data by bit significance: critical bits are stored in more frequently refreshed devices, while non-critical bits are stored in less frequently refreshed devices. To further reduce DRAM energy consumption, this paper also combines soft approximation with hard approximation, which reduces the number of DRAM accesses. Simulation results show that refresh energy consumption is reduced by 69.71% and total energy consumption by 26.0% for the hybrid memory, with a negligible accuracy drop in both the training and testing phases of state-of-the-art deep networks.
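The bit-significance reorganization described above can be illustrated with a minimal sketch (not the paper's implementation): each value is partitioned into critical high-order bits, destined for a frequently refreshed device, and non-critical low-order bits, destined for a less frequently refreshed one. The function names `split_bits` and `merge_bits` and the 4/4 split are hypothetical choices for illustration.

```python
# Hypothetical sketch of bit-significance data reorganization: split each
# 8-bit value into critical high bits (frequently refreshed DRAM) and
# non-critical low bits (less frequently refreshed DRAM).

def split_bits(value, n_critical=4, width=8):
    """Split an unsigned `width`-bit value into (critical, non_critical) parts."""
    n_low = width - n_critical
    critical = value >> n_low                   # high bits: accuracy-critical
    non_critical = value & ((1 << n_low) - 1)   # low bits: error-tolerant
    return critical, non_critical

def merge_bits(critical, non_critical, n_critical=4, width=8):
    """Reassemble a value from its critical and non-critical bit groups."""
    n_low = width - n_critical
    return (critical << n_low) | non_critical

v = 0b10110110  # 182
hi, lo = split_bits(v)
assert merge_bits(hi, lo) == v

# A retention error in the weakly refreshed (non-critical) bank perturbs
# the reconstructed value only slightly, bounded by the low-bit range:
corrupted = merge_bits(hi, lo ^ 0b0001)  # flip one low-order bit
assert abs(corrupted - v) <= (1 << (8 - 4)) - 1
```

The design intuition is that a bit flip confined to the low-order group changes the stored value by at most 2^(width − n_critical) − 1, which error-tolerant deep learning workloads can typically absorb.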