Abstract

The memristor crossbar offers high parallelism in implementing matrix-vector multiplication, which can speed up neural network computation. However, faulty memristors resulting from hardware defects significantly degrade the classification accuracy of neural networks deployed onto the crossbar. Weight mapping is a low-cost fault-tolerance scheme. Unfortunately, existing schemes usually conduct fault-aware weight mapping at whole-row granularity, which constrains the optimization space for fault tolerance. To overcome this problem, in this paper we propose operational unit (OU) level weight mapping, which further adjusts the mapping of the weight blocks inside each OU after the row-granularity weight mapping. This strategy achieves fine-grained fault tolerance. Moreover, we unify similar inputs across different OUs in order to constrain the increase in the number of input vectors caused by the OU-level mapping adjustment. The simulation results demonstrate that, at defect rates of 5%, 10%, 15% and 20%, the average classification accuracies of the networks optimized by the proposed scheme are improved by 1.165%, 4.38%, 10.73% and 12.58%, respectively, compared with row-level weight mapping. Moreover, the average numbers of input vectors are reduced by 61.8%, 73.2%, 75.3% and 64.9%, compared with OU-level weight mapping that does not unify similar inputs.
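The OU-level adjustment described above can be illustrated with a small sketch: within one OU, weight rows are re-ordered so that large weights avoid crossbar cells stuck at zero. The greedy assignment below is an illustrative stand-in for the paper's block-adjustment procedure; all function names and the cost model are assumptions, not the authors' exact method.

```python
# Hypothetical sketch of OU-level fault-aware remapping: after row-level
# mapping, the weight rows inside one OU are permuted so that the largest
# weights land on fault-free cells. Greedy heuristic, not the paper's scheme.

def fault_cost(weight_row, fault_row):
    """Total magnitude of weights that would land on stuck-at-zero cells."""
    return sum(abs(w) for w, f in zip(weight_row, fault_row) if f)

def remap_ou(weights, faults):
    """Greedily assign each weight row of an OU to the crossbar row where
    its fault cost is smallest. Returns the permuted weight matrix."""
    n = len(weights)
    free_rows = set(range(n))
    order = [None] * n
    # Place the rows with the largest total weight first: they lose the
    # most accuracy if forced onto faulty cells.
    for wi in sorted(range(n), key=lambda i: -sum(abs(w) for w in weights[i])):
        best = min(free_rows, key=lambda r: fault_cost(weights[wi], faults[r]))
        order[best] = weights[wi]
        free_rows.remove(best)
    return order

# Example: a 2x3 OU where crossbar row 0 has a stuck-at-zero cell in column 0.
W = [[0.9, 0.1, 0.2], [0.0, 0.5, 0.4]]
F = [[1, 0, 0], [0, 0, 0]]   # 1 marks a stuck-at-zero cell
remapped = remap_ou(W, F)    # the row holding 0.9 avoids the faulty cell
```

In this toy case the row containing the large weight 0.9 is moved to the fault-free crossbar row, while the row whose first weight is already zero absorbs the stuck-at-zero cell at no cost.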

Highlights

  • The deep neural network (DNN) achieves state-of-the-art prediction accuracy at the cost of large weight storage space and millions of memory accesses and computation operations [1], [2]

  • The simulation results demonstrate that, at defect rates of 5%, 10%, 15% and 20%, the average classification accuracies of the networks optimized by the proposed scheme are improved by 1.165%, 4.38%, 10.73% and 12.58%, respectively, compared with row-level weight mapping

  • In this paper, we propose an operational unit (OU) level weight mapping algorithm that achieves fine-grained fault tolerance for memristor-crossbar-based neural network accelerators in the presence of hardware defects


Summary

INTRODUCTION

The deep neural network (DNN) achieves state-of-the-art prediction accuracy at the cost of large weight storage space and millions of memory accesses and computation operations [1], [2]. An increase in the number of input vectors requires larger storage space, introduces timing overhead for switching the applied inputs on the word lines, and degrades the computational performance of the crossbar, because the inputs need to be switched for different OUs. To overcome this problem, the third step of the proposed algorithm reduces the number of input vectors by unifying similar inputs among some OUs. A threshold can be set on the accuracy loss to control the input-unifying process.
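The input-unification step above can be sketched as a greedy merge: input vectors applied to different OUs are replaced by one shared representative whenever they are sufficiently similar, so fewer distinct vectors must be stored and switched on the word lines. The names, the Hamming-distance metric, and the `max_diff` threshold (standing in for the accuracy-loss threshold) are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of unifying similar input vectors among OUs.
# Two binary input vectors are merged when they differ in at most
# `max_diff` positions; `max_diff` acts as the accuracy-loss threshold.

def hamming(a, b):
    """Number of positions where two binary input vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def unify_inputs(ou_inputs, max_diff):
    """Greedily merge OU input vectors. Returns (representatives,
    assignment), where assignment[i] is the index of the representative
    vector reused by OU i."""
    reps = []          # unified input vectors actually stored
    assignment = []    # which representative each OU reuses
    for vec in ou_inputs:
        for j, rep in enumerate(reps):
            if hamming(vec, rep) <= max_diff:
                assignment.append(j)   # reuse an existing similar vector
                break
        else:
            assignment.append(len(reps))
            reps.append(vec)           # no similar vector yet: keep this one
    return reps, assignment

# Example: four OUs whose inputs form two near-identical pairs, so only
# two distinct vectors need to be stored and applied to the word lines.
inputs = [[1, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 0], [0, 1, 0, 1]]
reps, assign = unify_inputs(inputs, max_diff=1)
```

A larger `max_diff` merges more vectors (less storage and switching overhead) at the cost of a larger deviation between the applied and the intended inputs, which is why the unification is bounded by an accuracy-loss threshold.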

ROW LEVEL WEIGHT MAPPING
Findings
CONCLUSION
