Abstract

The least mean square (LMS) algorithm is one of the most commonly used techniques for obtaining an optimized solution in adaptive schemes, especially in Gaussian environments. However, the least mean fourth (LMF) algorithm and its variants, such as the normalized LMF (NLMF), perform better in non-Gaussian environments. Conventional LMF algorithms usually diverge in a non-Gaussian environment with dynamic input. Conventionally, regularization is achieved by adding a small constant compensation term to the denominator of the learning rate to protect the algorithm from divergence. This paper introduces an efficient time-varying regularized normalized least mean fourth (R-NLMF) algorithm. In the proposed algorithm, the regularization term is made time-varying and gradient-adaptive according to a steepest-descent approach. Thus, the proposed algorithm adapts its learning rate to the environment and the input signal dynamics. A similar approach, namely the generalized normalized gradient descent (GNGD) algorithm, has previously been applied to the normalized least mean square (NLMS) algorithm in the Gaussian environment. However, due to its dependence on NLMS, the performance of the GNGD algorithm degrades in non-Gaussian environments. To overcome this problem, an efficient regularized NLMF algorithm for non-Gaussian environments is proposed. The algorithm shows promising results, achieving faster convergence while maintaining lower steady-state misadjustment. Simulations are carried out to support the theoretical development.
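For readers unfamiliar with the idea of gradient-adaptive regularization, the following Python sketch illustrates the general structure described in the abstract: an NLMF-type weight update whose regularization term is itself adapted by a steepest-descent step, in the spirit of GNGD. The abstract does not give the paper's exact recursions, so the particular normalization (||x||^2 e^2 + eps), the eps update, and the function and parameter names below are illustrative assumptions, not the authors' R-NLMF algorithm.

    # Illustrative sketch only: a GNGD-style, gradient-adaptive regularization term
    # applied to an NLMF-type adaptive filter. The normalization and the eps
    # recursion are assumed forms, not the paper's exact derivation.
    import numpy as np

    def rnlmf_sketch(x, d, M=8, mu=0.5, rho=1e-3, eps0=1.0, eps_min=1e-6):
        """Adaptive filter of length M driven by input x and desired signal d."""
        N = len(x)
        w = np.zeros(M)
        eps = eps0
        u_prev = np.zeros(M)     # previous regressor, used in the eps gradient step
        e_prev = 0.0             # previous a priori error
        den_prev = 1.0           # previous denominator
        e_hist = np.zeros(N)
        for n in range(M, N):
            u = x[n - M + 1:n + 1][::-1]       # regressor (most recent sample first)
            e = d[n] - w @ u                   # a priori error
            den = (u @ u) * e**2 + eps         # NLMF-style normalization (assumed form)
            w = w + mu * e**3 * u / den        # normalized least-mean-fourth update
            # Steepest-descent adaptation of the regularization term (illustrative):
            eps = eps - rho * mu * (e**3) * (e_prev**3) * (u @ u_prev) / den_prev**2
            eps = max(eps, eps_min)            # keep the denominator strictly positive
            u_prev, e_prev, den_prev = u, e, den
            e_hist[n] = e
        return w, e_hist

The key point of the sketch is that eps is no longer a fixed constant: it is driven by a gradient of the (fourth-power) error cost, so the effective learning rate adjusts to the input signal dynamics, which is the mechanism the abstract attributes to the proposed R-NLMF algorithm.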
