Abstract

High-resolution methods are generally defined as high-performance methods for estimating and/or detecting the desired and/or undesired signal components present in a given set of data. The term “high-resolution” also implies a good ability to resolve very “similar” signal components. One of the most common problems in signal processing is frequency estimation, where “high-resolution” often refers to a good ability to resolve two or more closely spaced frequencies in the given data. There are two groups of high-resolution methods: parametric methods and non-parametric methods. The parametric high-resolution methods result from ingenious exploitation of known data structures. The non-parametric high-resolution methods maximize some desired information output with little knowledge of the data structure. The choice between the two largely depends on one’s confidence in the assumed data model. In this chapter, we introduce readers to a range of existing parametric high-resolution methods. In Section 1.2, we present several frequency estimation techniques based on algebraic principles: linear prediction, matrix pencil, and iterative quadratic maximum likelihood. The linear prediction and matrix pencil methods can achieve near-optimal estimation accuracy without the local convergence issues associated with the optimal methods, and they are among the most computationally efficient. The iterative quadratic maximum likelihood method is an approximation of the (exact) maximum likelihood method; under some conditions (e.g., high SNR), this approximation achieves the optimal accuracy. Concepts such as forward-and-backward averaging, total least squares, and (joint) singular value decomposition of orthonormal matrices will also be discussed. In Section 1.3, we present methods that exploit large-sample theorems in statistics.
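To make the algebraic approach concrete, here is a minimal NumPy sketch of matrix pencil frequency estimation. The function name, the pencil-parameter default, and the SVD-based rank truncation are our own illustrative choices, not the chapter's notation; this is a sketch of the general technique, not the book's implementation.

```python
import numpy as np

def matrix_pencil_freqs(x, num_freqs, pencil=None):
    """Estimate normalized frequencies of complex sinusoids via matrix pencil."""
    N = len(x)
    L = pencil if pencil is not None else N // 2   # pencil parameter (illustrative default)
    # Hankel data matrix of size (N-L) x (L+1); consecutive rows are shifted windows
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]                   # the two shifted sub-matrices of the pencil
    # Rank-truncated SVD of Y0 suppresses noise before inverting the pencil
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    U, s, Vh = U[:, :num_freqs], s[:num_freqs], Vh[:num_freqs]
    # Eigenvalues of the reduced pencil pinv(Y0) @ Y1 are z_k = exp(j*2*pi*f_k)
    A = np.diag(1.0 / s) @ U.conj().T @ Y1 @ Vh.conj().T
    z = np.linalg.eigvals(A)
    return np.sort(np.angle(z) / (2 * np.pi) % 1.0)

# Two closely spaced frequencies, resolved from 64 noiseless samples
n = np.arange(64)
x = np.exp(2j * np.pi * 0.20 * n) + np.exp(2j * np.pi * 0.23 * n)
print(matrix_pencil_freqs(x, 2))   # ≈ [0.20, 0.23]
```

Note how the estimate comes from a single eigendecomposition with no iterative search, which is why the method avoids the local convergence issues of the optimal (search-based) estimators.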
In particular, we focus on data consisting of multiple independent measurements. The key data structure is captured by the dominant (principal) subspace of the data matrix, or by the dominant eigenvectors of the data covariance matrix. The principal subspace is referred
