Abstract
The back propagation (BP) algorithm has been very successful in training multilayer perceptron-based equalisers; despite this success, BP convergence remains slow. In this paper we present a new approach to improve the training efficiency of the multilayer perceptron-based equaliser (MLPE). Our approach consists of modifying the conventional back propagation algorithm by introducing an adaptive nonlinearity in the activation function. Experimental results compare the performance of the MLPE trained with conventional BP against the improved back propagation with adaptive gain (IBPAG). Because the gain of the activation function is adaptive, the nonlinear capacity and flexibility of the MLP are significantly enhanced. The convergence properties of the proposed algorithm are therefore improved compared to BP, and the proposed algorithm achieves the best performance in all simulation experiments.
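The abstract does not spell out the exact IBPAG update rule, but the idea of a trainable activation gain can be illustrated with a minimal sketch: a sigmoid f(a) = 1 / (1 + exp(-c·a)) whose slope c is updated by gradient descent alongside the weights. The toy 2-tap channel, network sizes, and learning rate below are assumptions for illustration only, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a, c):
    # Sigmoid with gain (slope) parameter c: f(a) = 1 / (1 + exp(-c * a))
    return 1.0 / (1.0 + np.exp(-c * a))

# Assumed toy equalisation data: BPSK symbols through a hypothetical
# 2-tap channel h = [1.0, 0.5] plus Gaussian noise; the equaliser
# sees a sliding window of received samples.
n, taps = 2000, 3
s = rng.choice([-1.0, 1.0], size=n + taps)
r = s[:-1] + 0.5 * s[1:] + 0.1 * rng.standard_normal(n + taps - 1)
X = np.stack([r[i:i + taps] for i in range(n)])   # input windows
y = (s[taps - 1:taps - 1 + n] + 1) / 2            # targets in {0, 1}

# One hidden layer; weights plus a trainable gain per layer.
W1 = rng.standard_normal((taps, 8)) * 0.5
W2 = rng.standard_normal((8, 1)) * 0.5
c1, c2 = 1.0, 1.0          # activation gains, adapted like the weights
lr = 0.1

for epoch in range(200):
    # Forward pass
    a1 = X @ W1
    h = sigmoid(a1, c1)
    a2 = h @ W2
    out = sigmoid(a2, c2)

    err = out - y[:, None]                 # dE/d(out) for squared error

    # Backward pass: the gain c multiplies the usual f(1-f) term, and
    # the gain itself gets a gradient through a * f(1-f).
    d2 = err * c2 * out * (1 - out)
    grad_c2 = np.mean(np.sum(err * a2 * out * (1 - out), axis=1))
    d1 = (d2 @ W2.T) * c1 * h * (1 - h)
    grad_c1 = np.mean(np.sum((d2 @ W2.T) * a1 * h * (1 - h), axis=1))

    # Update weights and gains together
    W2 -= lr * h.T @ d2 / n
    W1 -= lr * X.T @ d1 / n
    c2 -= lr * grad_c2
    c1 -= lr * grad_c1

pred = (out > 0.5).astype(float)
print("training accuracy:", np.mean(pred[:, 0] == y))
```

The only change relative to standard BP is that each gain c enters the chain rule twice: scaling the usual delta terms and receiving its own gradient, so the steepness of the nonlinearity adapts during training rather than staying fixed.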