Abstract
Sparse least mean square (LMS) algorithms employ approximations of sparsity constraints as a zero-point attraction term that forces small tap weights toward the origin when the unknown system to be identified is sparse. Recently, the online linearized Bregman iteration (OLBI) algorithm adopted soft thresholding based on $L_{1}$-norm regularization to reduce the steady-state error. Although soft thresholding successfully improves the accuracy of the adaptive filter for sparse systems, it is limited to $L_{1}$-norm regularization. In sparse representation, $L_{0}$-norm regularization can theoretically yield the sparsest representation and thus promises better performance in adaptive filters. In this regard, we introduce an $L_{0}$-norm-based LMS algorithm that exploits hard thresholding through a variable splitting method. The proposed algorithm preserves the behavior of large tap weights and strongly forces small tap weights to zero via a relaxation of the $L_{0}$-norm regularization. We also provide the mean stability conditions and the theoretical mean-square performance of the proposed algorithm. Experimental results show that the proposed algorithm achieves superior convergence performance compared with conventional sparse LMS algorithms.
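To make the hard-thresholding idea concrete, the following is a minimal, illustrative sketch of an LMS identification loop with an $L_{0}$-style zero-attraction step. The step size `mu` and threshold `tau` are hypothetical parameters chosen for illustration; the authors' actual algorithm applies hard thresholding through variable splitting, which this simplified loop does not reproduce in full.

```python
import numpy as np

def hard_threshold(w, tau):
    """Set tap weights with magnitude below tau to zero (L0-style attraction)."""
    return np.where(np.abs(w) > tau, w, 0.0)

def sparse_lms_identify(x, d, n_taps, mu=0.01, tau=1e-3):
    """Illustrative sparse LMS loop for system identification.

    x: input signal; d: desired signal (sparse system output plus noise).
    Returns the estimated tap-weight vector.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]     # regressor, most recent sample first
        e = d[n] - w @ u              # a priori estimation error
        w = w + mu * e * u            # standard LMS gradient update
        w = hard_threshold(w, tau)    # zero small taps; large taps pass unchanged
    return w
```

Unlike the soft thresholding used in OLBI, which shrinks every coefficient toward zero and thus biases large taps, the hard-thresholding step above leaves taps whose magnitude exceeds the threshold untouched, matching the abstract's claim that the behavior of large tap weights is preserved.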