Abstract

In this paper, we introduce new, more efficient methods for training recurrent neural networks (RNNs) for system identification and Model Reference Control (MRC). These methods are based on a recently developed understanding of the error surfaces of RNNs, which contain spurious valleys that disrupt the search for global minima. The spurious valleys are caused by instabilities in the networks, which become more pronounced as the prediction horizon grows. The methods described in this paper increase the prediction horizon in a principled way that enables the search algorithms to avoid the spurious valleys.

This work also presents a novelty sampling method for collecting new data wisely. A clustering method detects when an RNN is extrapolating, that is, operating outside the region spanned by the training set, where adequate performance cannot be guaranteed. The novel data identified in this way is added to the original training set, and network performance improves when additional training is performed on the augmented data set.

The new techniques are applied to the model reference control of a magnetic levitation system, and are tested on both simulated and experimental versions of the system.
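The clustering-based extrapolation check described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes k-means centroids fitted on the training inputs, with a per-cluster radius (the largest training distance to its own centroid, scaled by a hypothetical safety margin) serving as the extrapolation threshold. All function names and the `margin` parameter are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on training inputs X (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # copy via fancy indexing
    for _ in range(iters):
        # distance of every sample to every centroid
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def fit_radii(X, centers, labels, margin=1.1):
    """Per-cluster threshold: max training distance to own centroid, scaled by margin."""
    d = np.linalg.norm(X - centers[labels], axis=1)
    return np.array([margin * d[labels == j].max() if np.any(labels == j) else 0.0
                     for j in range(len(centers))])

def is_extrapolating(x, centers, radii):
    """Flag an input that falls outside every cluster's radius (novelty candidate)."""
    d = np.linalg.norm(centers - x, axis=1)
    j = d.argmin()
    return d[j] > radii[j]

# Sketch of novelty sampling: inputs flagged as extrapolation are
# appended to the training set before the network is retrained.
def augment_training_set(X_train, X_new, centers, radii):
    novel = [x for x in X_new if is_extrapolating(x, centers, radii)]
    return np.vstack([X_train] + novel) if novel else X_train
```

In use, the centroids and radii would be fitted once on the original training inputs; any operating point whose distance to the nearest centroid exceeds that cluster's radius is treated as novel and queued for collection and retraining.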
