Abstract

This paper presents an extensive study of fault-tolerant training of feedforward artificial neural networks. We present several versions of a highly robust training algorithm and report the results of their simulations. Our algorithm is shown to outperform existing training algorithms in its ability to tolerate different fault types and larger numbers of hidden-unit failures. We show that the generalization ability of the proposed algorithm is substantially better than that of the standard backpropagation algorithm and is comparable to that of other existing fault-tolerant algorithms. The algorithm is based on the backpropagation algorithm with built-in measures for extensive fault-tolerant training. A novel concept presented in this paper is that of training the network for fault types beyond the limits of the activation function. We demonstrate that training for such unrealistic fault types enables the network to be more tolerant of realistic fault types within the limits of the activation function. Further, tradeoffs between training time, enhanced fault tolerance, and generalization properties are studied. © 1997 Elsevier Science Ltd. All Rights Reserved.
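To make the core idea concrete, here is a minimal sketch of fault-injection training on a toy feedforward network. It is an illustrative assumption of how such training might look, not the paper's exact algorithm: each backpropagation step, one hidden unit is forced to a stuck-at value, including values (±2) that lie beyond the sigmoid's [0, 1] range, echoing the paper's notion of training for fault types outside the activation-function limits. All names, the XOR task, and the hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR, a standard test for a small feedforward network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))  # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (4, 1))  # hidden -> output weights

# Stuck-at fault values. 0 and 1 sit on the sigmoid's limits; -2 and +2
# lie *beyond* them -- the "unrealistic" fault types the abstract describes.
STUCK_VALUES = np.array([0.0, 1.0, -2.0, 2.0])
LR = 0.3

def train_step():
    """One backprop step with a random stuck-at fault on one hidden unit."""
    global W1, W2
    h = sigmoid(X @ W1)
    dh = h * (1.0 - h)                  # sigmoid derivative, pre-fault
    unit = rng.integers(0, h.shape[1])  # pick a hidden unit to fail
    h = h.copy()
    h[:, unit] = rng.choice(STUCK_VALUES)
    dh[:, unit] = 0.0                   # a stuck output passes no gradient back
    out = sigmoid(h @ W2)
    err = out - y
    d_out = err * out * (1.0 - out)
    # Compute both gradients before updating either weight matrix.
    grad_W2 = h.T @ d_out
    grad_W1 = X.T @ ((d_out @ W2.T) * dh)
    W2 -= LR * grad_W2
    W1 -= LR * grad_W1
    return float(np.mean(err ** 2))

losses = [train_step() for _ in range(3000)]
```

Because a different unit fails on every step, the network cannot rely on any single hidden unit, which is the mechanism by which fault-injection training buys tolerance to unit failures at test time.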
