Abstract
Real-time recurrent learning (RTRL), commonly employed for training a fully connected recurrent neural network (RNN), suffers from a slow convergence rate. Because of this deficiency, a decision feedback recurrent neural equalizer (DFRNE) trained with the RTRL requires long training sequences to achieve good performance. In this paper, extended Kalman filter (EKF) algorithms based on the RTRL are presented for the DFRNE in a state-space formulation of the system, in particular for complex-valued signal processing. The main features of the global EKF and decoupled EKF algorithms are fast convergence and good tracking performance. Through nonlinear channel equalization, the performance of the DFRNE with the EKF algorithms is evaluated and compared with that of the DFRNE with the RTRL.
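As context, the following is a minimal sketch of the standard EKF weight-update recursion commonly used for neural-network training, written for complex-valued quantities; the symbols (weight vector w_n, error covariance P_n, Jacobian H_n of the network output with respect to the weights as supplied by RTRL, desired response d_n, network output y_n, noise covariances R_n and Q_n) and this generic formulation are illustrative assumptions, not equations quoted from the paper.

% Standard (global) EKF recursion for weight estimation; superscript H
% denotes Hermitian transpose. Illustrative sketch only, not the paper's
% exact formulation.
\begin{align}
  \mathbf{K}_n     &= \mathbf{P}_n \mathbf{H}_n
                      \left(\mathbf{R}_n + \mathbf{H}_n^{H}\mathbf{P}_n\mathbf{H}_n\right)^{-1},\\
  \mathbf{w}_{n+1} &= \mathbf{w}_n + \mathbf{K}_n\,(d_n - y_n),\\
  \mathbf{P}_{n+1} &= \mathbf{P}_n - \mathbf{K}_n \mathbf{H}_n^{H}\mathbf{P}_n + \mathbf{Q}_n.
\end{align}

In the commonly used decoupled variant, the same recursion is applied to disjoint groups of weights (for example, per neuron) with a block-diagonal covariance, which reduces the computational cost relative to the global EKF.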