Abstract
For adaptive filters, combining a whitening process (a lattice predictor) with the NLMS algorithm is a promising approach to achieving fast convergence with low computational cost. However, the filter coefficient update is not synchronized with the reflection coefficient update, which causes unstable behavior. We analyze the effects of this mismatch and propose a synchronized learning algorithm to solve the problem: the synchronization error between the two updates is removed, yielding fast convergence and a small residual error. This algorithm, however, requires O(ML) computations, where M is the adaptive filter length and L is the lattice predictor length, which is still large compared with the NLMS algorithm. To reduce computation while maintaining fast convergence, a block implementation is proposed: the reflection coefficients are updated at a fixed period and held constant between updates. The proposed block implementation can also be applied effectively to parallel-form adaptive filters, such as sub-band adaptive filters. Simulations with speech signals show that the learning curve of the proposed block implementation is slightly slower than that of our original algorithm, while saving computational complexity.
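To make the two ingredients of the abstract concrete, the sketch below shows (1) a plain NLMS update, whose O(M)-per-sample cost is the baseline the paper compares against, and (2) a single lattice predictor stage whose reflection coefficient is re-estimated (here with the Burg formula) only once every K samples and held fixed in between, illustrating the block-update idea. This is an illustrative sketch, not the paper's exact algorithm: the function names, the step size `mu`, the block length `K`, and the reduction to one lattice stage are all assumptions made for the example.

```python
import numpy as np

def nlms(x, d, M, mu=0.5, eps=1e-8):
    """Plain NLMS: adapt an M-tap filter w so that w @ [x[n],...,x[n-M+1]]
    tracks d[n]. Cost is O(M) per sample (illustrative sketch)."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]         # M most recent input samples
        e[n] = d[n] - w @ xn                  # a-priori error
        w += mu * e[n] * xn / (eps + xn @ xn) # normalized gradient step
    return w, e

def block_lattice_stage(x, K=2048):
    """One lattice predictor stage whose reflection coefficient k is
    refreshed (Burg estimate) only at block boundaries, every K samples,
    and kept fixed inside each block -- the block-update idea of the
    abstract, reduced to a single stage for illustration."""
    k = 0.0
    f = np.zeros(len(x))                      # forward prediction error
    num = den = 0.0                           # Burg statistics of current block
    for n in range(1, len(x)):
        f[n] = x[n] - k * x[n - 1]            # whitening with the fixed k
        num += x[n] * x[n - 1]
        den += x[n] ** 2 + x[n - 1] ** 2
        if n % K == 0:                        # block boundary: refresh k
            k = 2.0 * num / den if den > 0 else 0.0
            num = den = 0.0                   # restart statistics
    return f, k
```

For white input, `nlms` identifies an unknown FIR system essentially exactly; for an AR(1) input, the block-updated reflection coefficient converges to the AR parameter, so the stage whitens the input between the (cheap, infrequent) coefficient refreshes.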