Abstract

This paper addresses a reduced-dimension Wiener filter based on a perpendicular partition. Two algorithms, the cyclic per-weight (CPW) and the batch CPW (BCPW), are presented, which efficiently perform reduced-dimension adaptive beamforming. The CPW is suited to slowly changing environments in which many snapshots are available to train the weight vector. The BCPW is designed for fast-changing scenarios with short training sequences, where the number of snapshots may even be smaller than the number of sensors. In both algorithms the entries of the weight vector are solved cyclically, one at a time, so each update involves only a scalar optimization problem and matrix inversion is avoided. No intermediate transformation or orthogonal/non-orthogonal decomposition is required, so the implementations are relatively simple. A convergence analysis and a comparison of computational complexities are provided. Simulation results show the superiority of the proposed algorithms: the CPW has better convergence properties than previous schemes, and the BCPW offers low computational complexity and good output SINR performance in low-snapshot scenarios.
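The cyclic, one-weight-at-a-time solution described above amounts to coordinate descent on the Wiener mean-square-error cost: fixing all weights but one, the optimal value of the remaining weight has a closed scalar form. A minimal sketch of this idea (illustrative only, not the paper's exact CPW recursion; the function name and sweep count are assumptions):

```python
import numpy as np

def cyclic_wiener(R, p, n_sweeps=50):
    """Solve the Wiener equation R w = p by cyclic per-weight updates.

    R : (M, M) Hermitian positive-definite covariance matrix
    p : (M,) cross-correlation vector

    Each pass updates every weight in turn with its scalar optimum,
    so no matrix inversion is ever formed (hypothetical sketch).
    """
    M = len(p)
    w = np.zeros(M, dtype=complex)
    for _ in range(n_sweeps):
        for k in range(M):
            # remove weight k's own contribution, then re-optimize it
            r = p[k] - R[k] @ w + R[k, k] * w[k]
            w[k] = r / R[k, k]
    return w
```

Because each step exactly minimizes the quadratic cost in one coordinate, this sketch is Gauss–Seidel iteration on R w = p, which converges whenever R is Hermitian positive definite.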

Highlights

  • The array beamforming technique has been extensively used in wireless communications, radar, sonar, microphone arrays, and so on [1]–[5]

  • The normalized least mean square (NLMS) algorithm employs a variable step size to improve the convergence speed, but it still depends on a constant step-size parameter, which degrades the beamformer's performance

  • We devise a perpendicular-partition Wiener filter and present two reduced-dimension adaptive beamforming algorithms, i.e., the cyclic per-weight (CPW) and the batch CPW (BCPW)

Summary

Introduction

The array beamforming technique has been extensively used in wireless communications, radar, sonar, microphone arrays, and so on [1]–[5]. Numerous techniques exist to implement adaptive beamforming [6], [7], such as the sample matrix inversion (SMI) [8], least mean square (LMS) [4], [9], [10], and recursive least squares (RLS) methods [11]–[13]. The work of [4] developed another efficient variable-step-size mechanism, the shrinkage linear LMS (SL-LMS) algorithm. It exploits the relationship between the a posteriori and a priori error signals, thereby noticeably enhancing the convergence rate and decreasing the misadjustment.
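For context, the baseline NLMS update normalizes the LMS step by the instantaneous input power, so the effective step size varies with each snapshot. A textbook sketch of complex NLMS weight adaptation (illustrative assumption; the SL-LMS of [4] further adapts the step via a shrinkage rule on the a priori/posteriori errors):

```python
import numpy as np

def nlms_beamformer(X, d, mu=0.5, eps=1e-8):
    """Textbook NLMS weight adaptation (hypothetical sketch).

    X : (N, M) array of N snapshots from an M-sensor array
    d : (N,) desired (training) signal
    """
    N, M = X.shape
    w = np.zeros(M, dtype=complex)
    for n in range(N):
        x = X[n]
        e = d[n] - np.vdot(w, x)  # a priori error, y = w^H x
        # step normalized by input power ||x||^2
        w = w + mu * np.conj(e) * x / (np.vdot(x, x).real + eps)
    return w
```

The normalization makes convergence largely insensitive to the input signal power, at the cost of a division per snapshot.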
