Abstract

Resistive random access memory (RRAM) is a promising technology for energy-efficient in-memory computing. However, due to technology limits, RRAM devices face a series of reliability issues, and deep neural network (DNN) computing based on RRAM suffers from accuracy degradation. On the one hand, offline DNN training solutions struggle to fully account for and simulate all nonidealities. Worse still, new errors or nonidealities may emerge as the RRAM is used, further undermining the effectiveness of offline training. On the other hand, online training poses great challenges for programming overhead and device lifetime: the iterative write-verify technique used to program multi-bit RRAM cells makes write latency more than 10× longer than read latency. To overcome these issues, we propose a compensation architecture and training designs that mitigate the realistic accuracy loss in RRAM chips with negligible hardware resource overhead for RRAM-based DNN computing. First, we add trainable compensation channels to the crossbars, utilizing the residual resources left after the original weight mapping. Second, an offline training procedure that uses computing outputs from the hardware is triggered to determine appropriate weight values for the compensation channels. Experimental results demonstrate that the proposed design guarantees ≤0.8% DNN accuracy loss even when nonidealities reduce the original accuracy to ≤73%.
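To make the mechanism concrete, below is a minimal PyTorch sketch of the idea, not the paper's actual implementation. It assumes the original weights are frozen (they are already programmed on-chip), that a small set of trainable compensation channels occupies spare crossbar columns and is driven by a slice of the layer inputs, and that the forward pass adds the nonideal hardware output to the compensation term so that only the compensation weights are updated during hardware-in-the-loop offline training. The class name, the `noisy_matmul` stand-in for the chip's nonideal output, and the choice of which inputs drive the compensation channels are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class CompensatedCrossbar(nn.Module):
    """Illustrative sketch (not the paper's exact design): an RRAM crossbar
    layer plus a small set of trainable compensation channels assumed to
    occupy the crossbar columns left over after the original weight mapping."""

    def __init__(self, in_features, out_features, n_comp, hw_matmul):
        super().__init__()
        # Original weights are already programmed on-chip; freeze them.
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features),
                                   requires_grad=False)
        # Trainable compensation channels (assumption: driven by the
        # first n_comp layer inputs).
        self.comp = nn.Parameter(torch.zeros(out_features, n_comp))
        self.n_comp = n_comp
        # hw_matmul(x, W): measured (or simulated) nonideal crossbar output.
        self.hw_matmul = hw_matmul

    def forward(self, x):
        y_hw = self.hw_matmul(x, self.weight)        # nonideal hardware MVM
        y_comp = x[:, :self.n_comp] @ self.comp.t()  # compensation term
        return y_hw + y_comp                         # corrected output


# Stand-in for the chip's nonideal matrix-vector multiplication.
def noisy_matmul(x, w, sigma=0.05):
    y = x @ w.t()
    return y + sigma * torch.randn_like(y)           # hypothetical nonideality

# Hardware-in-the-loop style offline training: only `comp` is optimized,
# while the forward pass uses the (here: simulated) hardware outputs.
layer = CompensatedCrossbar(64, 32, n_comp=8, hw_matmul=noisy_matmul)
opt = torch.optim.Adam([layer.comp], lr=1e-2)
for _ in range(100):
    x = torch.randn(16, 64)
    target = x @ layer.weight.t()                    # ideal software output
    loss = ((layer(x) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the hardware output does not depend on the compensation weights, gradients flow only through the compensation term, which matches the constraint that the already-programmed cells are not re-written during this procedure.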
