The classical multiple signal classification (MUSIC) algorithm has two main limitations. The first is an insufficient number of snapshots, which often yields an ill-conditioned sample covariance matrix in practical applications. The second is strong spatially colored, temporally white noise, which degrades the separability of the signal and noise subspaces. When samples are insufficient, the non-zero delay sample covariance matrix (SCM) contains few signal components, although its spatially colored, temporally white noise components are suppressed by the temporal method. To address this, a set of non-zero delay SCMs is constructed and a nonlinear objective function is formulated; using sufficiently many non-zero delay SCMs ensures that enough signal components are available for signal subspace estimation. The constrained optimization problem is then converted into an unconstrained one via the Lagrange multiplier method, and the resulting nonlinear equation is solved iteratively by Newton's method. Moreover, a proper initial value for the new algorithm is given, which improves the convergence of the iteration. At every iteration step, the noise subspace is removed by a pre-projection technique, yielding an improved signal subspace and a more efficient MUSIC algorithm. Experimental results show that the proposed algorithm achieves significantly better performance than existing methods.
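For concreteness, below is a minimal NumPy sketch of the core idea the abstract describes: non-zero delay SCMs suppress temporally white noise (whatever its spatial color), and their dominant subspace can serve as a signal subspace estimate for a MUSIC-style spectrum. The function names, the fixed set of delays, and the half-wavelength uniform linear array are illustrative assumptions; the paper's Lagrange/Newton iteration with pre-projection is not reproduced here, so this stands in for only the SCM-construction step.

```python
import numpy as np

def nonzero_delay_scms(X, delays):
    """Sample covariance matrices at non-zero delays.

    X : (M, N) array of N snapshots from an M-sensor array.
    delays : iterable of positive integer lags.
    Temporally white noise is uncorrelated across time, so its
    contribution to R(tau) = E[x(t) x(t - tau)^H] vanishes for tau != 0,
    regardless of its spatial color.
    """
    M, N = X.shape
    return [X[:, tau:] @ X[:, :N - tau].conj().T / (N - tau)
            for tau in delays]

def music_spectrum(X, n_sources, delays=(1, 2, 3), n_grid=721):
    # Stack the non-zero delay SCMs side by side; the dominant left
    # singular vectors of the stack estimate the signal subspace.
    R = np.hstack(nonzero_delay_scms(X, delays))
    U, _, _ = np.linalg.svd(R)
    En = U[:, n_sources:]                       # noise subspace estimate
    M = X.shape[0]
    angles = np.linspace(-90.0, 90.0, n_grid)
    # Steering vectors for a half-wavelength uniform linear array.
    a = np.exp(1j * np.pi * np.outer(np.arange(M),
                                     np.sin(np.deg2rad(angles))))
    denom = np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
    return angles, 1.0 / denom                  # MUSIC pseudospectrum
```

In this sketch the signal subspace comes from a single SVD of the stacked SCMs; the proposed algorithm instead refines that estimate iteratively (Newton's method with a pre-projection that removes the noise subspace at each step), which is where its reported performance gains originate.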