Abstract
In this paper, we introduce new, more efficient methods for training recurrent neural networks (RNNs). These methods are based on a new understanding of the error surfaces of RNNs that has been developed in recent years. These error surfaces contain spurious valleys that disrupt the search for global minima. The spurious valleys are caused by instabilities in the networks, which become more pronounced as the prediction horizon increases. The new methods described in this paper increase the prediction horizon in a principled way that enables the search algorithms to avoid the spurious valleys. The paper also presents a new method for determining when an RNN is extrapolating. When an RNN operates outside the region spanned by the training set, adequate performance cannot be guaranteed. The new method presented in this paper accurately predicts poor performance well before its onset.
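To make the idea of growing the prediction horizon concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm or schedule): a small recurrent predictor is trained in free-run mode, and the number of free-run steps is lengthened in stages, so that early training stays in the short-horizon regime where the error surface is better behaved. The model, toy data, batch size, and horizon schedule are all illustrative assumptions.

```python
# Hypothetical illustration of staged-horizon RNN training (PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy one-dimensional series: a noisy sine wave (stand-in for real training data).
t = torch.linspace(0, 20 * torch.pi, 2000)
series = torch.sin(t) + 0.05 * torch.randn_like(t)

class FreeRunRNN(nn.Module):
    """One-step recurrent predictor that can be free-run for many steps."""
    def __init__(self, hidden=16):
        super().__init__()
        self.hidden = hidden
        self.cell = nn.RNNCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x0, h, steps):
        preds = []
        x = x0
        for _ in range(steps):
            h = self.cell(x, h)
            x = self.out(h)          # feed the prediction back as the next input
            preds.append(x)
        return torch.stack(preds, dim=1), h   # (batch, steps, 1)

model = FreeRunRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
batch = 32

# Illustrative schedule: train on short horizons first, then longer ones.
for horizon in (2, 5, 10, 25, 50):
    for _ in range(200):
        # Sample random windows: one seed point plus `horizon` target points each.
        starts = torch.randint(0, len(series) - horizon - 1, (batch,)).tolist()
        x0 = series[torch.tensor(starts)].unsqueeze(1)                 # (batch, 1)
        targets = torch.stack(
            [series[s + 1 : s + 1 + horizon] for s in starts]
        ).unsqueeze(2)                                                 # (batch, horizon, 1)

        h = torch.zeros(batch, model.hidden)
        preds, _ = model(x0, h, horizon)
        loss = loss_fn(preds, targets)

        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"horizon {horizon:3d}: final batch loss {loss.item():.4f}")
```

In this sketch, each stage reuses the weights learned at the shorter horizon as the starting point for the longer one; the paper's principled criteria for when and how far to extend the horizon, and its extrapolation-detection method, are developed in the body of the paper.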