Abstract

In this paper, four on-line gradient-based learning algorithms for training neural network Hammerstein models are presented in a unified framework. These algorithms, namely backpropagation for series–parallel models, and backpropagation, the sensitivity method, and the truncated backpropagation through time (BPTT) algorithm for parallel models, are derived, analysed, and compared. For truncated BPTT, it is shown that the number of unfolding time steps needed to calculate the gradient to an assumed degree of accuracy can be determined from the impulse response functions of sensitivity models. The algorithms are shown to differ in computational complexity, gradient approximation accuracy, and convergence rate. Numerical examples are included to compare their performance.
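The setting the abstract describes can be illustrated with a minimal sketch, which is this editor's own construction and not the paper's formulation: a Hammerstein model is a static nonlinearity (here a one-hidden-layer network) followed by a linear dynamic block (here an FIR filter), and in the series–parallel configuration it can be trained by plain gradient descent on the one-step prediction error, with no recursion through past model outputs. All parameter names (`W`, `b`, `v`, `g`) and the choice of FIR dynamics are assumptions of this sketch.

```python
import numpy as np

# Illustrative sketch only, not the paper's algorithms: a neural-network
# Hammerstein model (static NN nonlinearity -> linear FIR block) trained by
# gradient descent in the series-parallel configuration.

rng = np.random.default_rng(0)

# Data from a "true" Hammerstein system: f(u) = tanh(2u) through an FIR filter.
u = rng.uniform(-1.0, 1.0, 400)
N = len(u)
h_true = np.array([0.5, 0.3, 0.1])
y = np.convolve(np.tanh(2.0 * u), h_true)[:N]

# Model parameters (names are this sketch's own): hidden layer (W, b, v) for
# the static nonlinearity, FIR taps g for the linear dynamics.
n_hidden, n_taps = 8, 3
W = rng.normal(scale=0.5, size=n_hidden)
b = rng.normal(scale=0.5, size=n_hidden)
v = rng.normal(scale=0.5, size=n_hidden)
g = np.zeros(n_taps)

lr = 0.05
for _ in range(300):
    hid = np.tanh(np.outer(u, W) + b)      # (N, n_hidden) hidden activations
    f = hid @ v                            # static nonlinearity output
    y_hat = np.convolve(f, g)[:N]          # linear block output
    e = (y_hat - y) / N                    # scaled prediction error
    # Gradient w.r.t. FIR taps: d y_hat[t] / d g[k] = f[t - k]
    grad_g = np.array([e[k:] @ f[: N - k] for k in range(n_taps)])
    # Backpropagate the error through the FIR filter to the nonlinearity output
    e_f = np.zeros(N)
    for k in range(n_taps):
        e_f[: N - k] += g[k] * e[k:]
    # Standard backpropagation through the static network
    grad_v = hid.T @ e_f
    delta = (1.0 - hid**2) * (e_f[:, None] * v)
    grad_W = delta.T @ u
    grad_b = delta.sum(axis=0)
    g -= lr * grad_g
    v -= lr * grad_v
    W -= lr * grad_W
    b -= lr * grad_b

# Final one-step-ahead mean squared error of the trained model
f_final = np.tanh(np.outer(u, W) + b) @ v
mse = np.mean((np.convolve(f_final, g)[:N] - y) ** 2)
```

In the parallel configuration the linear block would instead be fed back recursively, which is what makes the sensitivity method and truncated BPTT necessary for exact or approximate gradients there.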


