Abstract

Recurrent neural networks (RNN) model time series by feeding back the representation from the previous time instant as an input for the current instant, along with exogenous inputs. Two main shortcomings of RNN are: (1) the problem of vanishing gradients while backpropagating through time, and (2) the inability to learn in an unsupervised manner. Variants such as the long short-term memory (LSTM) network and gated recurrent units (GRU) have partially circumvented the first issue; the second issue still remains. In this work we propose a new variant of RNN based on the transform learning model, named recurrent transform learning (RTL). It can learn in an unsupervised, supervised, and semi-supervised fashion; it does not require backpropagation and hence does not suffer from the pitfalls of vanishing gradients. The proposed model is applied to a real-life example of short-term load forecasting, where we show that RTL improves over existing variants of RNN as well as over a state-of-the-art technique in load forecasting based on sparse coding.
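
The abstract does not spell out the model equations; for reference, a minimal sketch follows, assuming the standard RNN recurrence and the usual transform learning formulation. The recurrent coupling shown last, and the symbols $T$, $R$, $z_t$, are illustrative assumptions, not the paper's exact notation:

% Standard RNN recurrence: the hidden representation h_{t-1} is fed back
% along with the exogenous input x_t.
\[
h_t = \sigma\!\left(W h_{t-1} + U x_t + b\right)
\]
% Usual transform learning objective for data X, transform T, coefficients Z:
\[
\min_{T,\,Z} \; \lVert T X - Z \rVert_F^2
 + \lambda\left(\lVert T \rVert_F^2 - \log\lvert\det T\rvert\right)
 + \mu \lVert Z \rVert_1
\]
% A recurrent analogue in this spirit (assumption) would couple the
% coefficients across time instants via a recurrence matrix R:
\[
\min_{T,\,R,\,\{z_t\}} \; \sum_t \lVert T x_t + R z_{t-1} - z_t \rVert_2^2
 \;+\; \text{regularisers as above}
\]

Such a formulation would be solved by alternating minimisation over $T$, $R$, and $\{z_t\}$ rather than by backpropagation through time, which is consistent with the claim that RTL avoids vanishing gradients.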
