Abstract
We present an approach for selecting optimal parameters for the pipelined recurrent neural network (PRNN) in the paradigm of nonlinear and nonstationary signal prediction. Although there has recently been progress in algorithms for training the PRNN, little account has been taken of some features inherent to its architecture. We therefore provide a study of the role of nesting, which is inherent to the PRNN architecture. The number of nested modules needed for a given prediction task, and their contributions toward the final prediction gain (PG), give thorough insight into the way the PRNN performs and offer solutions for the optimization of its parameters. In particular, nesting, which is a contractive function by its nature, allows the forgetting factor in the cost function of the PRNN to exceed unity, whereupon it becomes an emphasis factor. This compensates for the small contribution of the distant modules to the prediction process caused by nesting, and helps to circumvent the vanishing-gradient problem experienced in RNNs used for prediction. The PRNN, with its parameters chosen according to the established criteria, is shown to outperform the linear least mean square (LMS) and recursive least squares (RLS) predictors, as well as previously proposed PRNN schemes, at no additional computational cost.
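For concreteness, a common form of the PRNN cost function (following Haykin and Li's original formulation; the symbols M, e_i(n), and \lambda are introduced here for illustration and are not defined in the abstract itself) weights the instantaneous squared error of each of the M nested modules by a power of the forgetting factor \lambda:

E(n) = \sum_{i=1}^{M} \lambda^{i-1} \, e_i^2(n)

With \lambda < 1, the modules distant from the output (large i) are progressively discounted on top of the attenuation already imposed by contractive nesting; permitting \lambda > 1, as proposed above, instead emphasizes their errors and so counteracts that attenuation. The prediction gain used to assess performance is conventionally taken as R_p = 10 \log_{10}\left(\sigma_s^2 / \sigma_e^2\right) dB, the ratio of the input signal variance to the prediction error variance.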