Abstract

In this paper, a self-recovery technique for feedforward neural networks called weight shifting, together with its analytical models, is proposed. The technique recovers a network when faulty links and/or neurons occur during operation. If some input links of a specific neuron are detected as faulty, their weights are shifted to healthy links of the same neuron. If a faulty neuron is encountered, it is treated as a special case of faulty links by considering all the output links of that neuron to be faulty. The aim of this technique is to recover the network in a short time without any retraining or hardware repair. We also propose a hardware architecture for implementing this technique.
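To make the recovery rule concrete, the sketch below illustrates weight shifting for a single neuron. It is only an illustration under stated assumptions: each faulty link's weight is redistributed equally over the neuron's remaining healthy links, whereas the paper's analytical models may prescribe a different shifting rule, and the function name `weight_shift` is hypothetical.

```python
import numpy as np

def weight_shift(weights, faulty_links):
    """Illustrative sketch of weight shifting for one neuron.

    `weights` holds the input-link weights of a single neuron and
    `faulty_links` the indices of links detected as faulty. As an
    assumption (the paper may use a different rule), each faulty
    weight is redistributed equally over the remaining healthy links,
    and the faulty links themselves are zeroed out.
    """
    weights = np.asarray(weights, dtype=float).copy()
    faulty = list(faulty_links)
    healthy = [i for i in range(len(weights)) if i not in set(faulty)]
    if not healthy:
        raise ValueError("no healthy links left to shift weights onto")
    shifted_total = weights[faulty].sum()
    weights[faulty] = 0.0                              # faulty links carry no signal
    weights[healthy] += shifted_total / len(healthy)   # equal redistribution (assumption)
    return weights

if __name__ == "__main__":
    w = [0.4, -0.2, 0.7, 0.1]
    print(weight_shift(w, faulty_links=[2]))  # weight 0.7 spread over links 0, 1, 3
```

In the same spirit, a faulty neuron would be handled as the special case in which all of its output links are marked faulty, so each downstream neuron applies the same shifting rule to the corresponding input link.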
