Abstract
A class of data-reusing (DR) learning algorithms for real-valued recurrent neural networks (RNNs) employed as nonlinear adaptive filters is extended to the complex domain, yielding a class of data-reusing learning algorithms for complex-valued recurrent neural networks (CRNNs). For rigour, the derivation of the data-reusing complex real-time recurrent learning (DRCRTRL) algorithm is undertaken for a general complex activation function. The analysis provides both error bounds and convergence conditions for the cases of contractive and expansive complex activation functions. The improved performance of the data-reusing algorithm over the standard one is verified by simulations on prediction of complex-valued signals.
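The core idea of a data-reusing update, reusing the current input/target pair several times to refine the weights within a single time step, can be sketched in a much simpler setting than the paper's recurrent network: a linear complex-valued adaptive filter with an LMS-style update. The function name, step size, and reuse count below are illustrative assumptions, not the paper's DRCRTRL algorithm.

```python
import numpy as np

def data_reusing_clms_step(w, x, d, mu=0.05, reuses=3):
    """One time step of a data-reusing complex LMS-style update (a sketch).

    The same input/target pair (x, d) is reused `reuses` times within the
    time step; `reuses=1` recovers the standard, non-data-reusing update.
    """
    for _ in range(reuses):
        y = np.vdot(w, x)             # filter output  y = w^H x
        e = d - y                     # instantaneous error for this reuse
        w = w + mu * np.conj(e) * x   # complex LMS correction
    return w, e
```

Each inner iteration shrinks the instantaneous error by a factor of roughly (1 - mu * ||x||^2), so reusing the sample acts like a larger effective step size while keeping each individual correction inside its stability bound, which is the trade-off the paper's error bounds and convergence conditions make precise for the recurrent, nonlinear case.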