Abstract

A class of least squares problems that arises in linear Bayesian estimation is analyzed. The data vector ${\bf y}$ is given by the model ${\bf y} = {\bf P}({\bf H}\bm{\theta} + \bm{\eta}) + {\bf w}$, where ${\bf H}$ is a known matrix, while $\bm{\theta}$, $\bm{\eta}$, and ${\bf w}$ are uncorrelated random vectors. The goal is to obtain the best estimate of $\bm{\theta}$ from the measured data. Applications of this estimation problem arise in multisensor data fusion and in wireless communication. The unknown matrix ${\bf P}$ is chosen to minimize the mean-squared error ${\bf E}(\|\bm{\theta} - \hat{\bm{\theta}}\|^2)$ subject to the power constraint $\operatorname{trace}({\bf P}{\bf P}^*) \le P$, where $\hat{\bm{\theta}}$ is the best affine estimate of $\bm{\theta}$. Earlier work characterized an optimal ${\bf P}$ in the case where the noise term $\bm{\eta}$ vanishes, while this paper analyzes the effect of $\bm{\eta}$, assuming its covariance is a multiple of ${\bf I}$. The singular value decomposition of an optimal ${\bf P}$ is expressed in the form ${\bf V}\bm{\Sigma}\bm{\Pi}{\bf U}^*$, where ${\bf V}$ and ${\bf U}$ are unitary matrices related to the covariances of $\bm{\theta}$ and ${\bf w}$ and to the singular vectors of ${\bf H}$, $\bm{\Sigma}$ is diagonal, and $\bm{\Pi}$ is a permutation matrix. The analysis is carried out in two special cases: (i) ${\bf H} = {\bf I}$ and (ii) the covariance of $\bm{\theta}$ is ${\bf I}$. In case (i), $\bm{\Pi}$ does not depend on the power $P$; in case (ii), $\bm{\Pi}$ generally depends on $P$. The optimal $\bm{\Pi}$ is determined in the limit as the power tends to zero or infinity, and a good approximation to an optimal $\bm{\Pi}$ is found for general $P$.
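
To make the setup concrete, the following minimal sketch evaluates the objective being minimized: for a fixed feasible precoder ${\bf P}$, it computes the best affine (LMMSE) estimate of $\bm{\theta}$ under the model ${\bf y} = {\bf P}({\bf H}\bm{\theta} + \bm{\eta}) + {\bf w}$ and its mean-squared error, then checks the closed form by Monte Carlo. The dimensions, the covariances of $\bm{\theta}$ and ${\bf w}$, and the particular ${\bf P}$ are illustrative assumptions, not values from the paper, and the precoder shown is merely feasible, not optimal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and parameters (hypothetical, not from the paper):
# theta in R^n, eta in R^k, y and w in R^m.
n, k, m = 4, 5, 5
sigma2  = 0.1      # covariance of eta is sigma2 * I, matching the paper's assumption
P_max   = 10.0     # power budget: trace(P P^T) <= P_max

H = rng.standard_normal((k, n))                 # known model matrix

# Random symmetric positive definite covariances for theta and w (assumed).
A = rng.standard_normal((n, n)); R_th = A @ A.T + np.eye(n)
B = rng.standard_normal((m, m)); R_w  = B @ B.T + np.eye(m)

# A feasible (not optimal) precoder, scaled to satisfy the power constraint.
P = rng.standard_normal((m, k))
P *= np.sqrt(P_max / np.trace(P @ P.T))

# Best affine (LMMSE) estimate: theta_hat = K y with K = Cov(theta, y) Cov(y)^{-1}
# (zero means assumed for all random vectors).
C_y  = P @ (H @ R_th @ H.T + sigma2 * np.eye(k)) @ P.T + R_w   # Cov(y)
C_ty = R_th @ H.T @ P.T                                        # Cov(theta, y)
K    = np.linalg.solve(C_y, C_ty.T).T                          # LMMSE gain matrix
mse  = np.trace(R_th - K @ C_ty.T)                             # E||theta - theta_hat||^2

# Monte Carlo check of the closed-form MSE.
N   = 200_000
Th  = np.linalg.cholesky(R_th) @ rng.standard_normal((n, N))
Eta = np.sqrt(sigma2) * rng.standard_normal((k, N))
W   = np.linalg.cholesky(R_w) @ rng.standard_normal((m, N))
Y   = P @ (H @ Th + Eta) + W
mc  = np.mean(np.sum((Th - K @ Y) ** 2, axis=0))

print(f"closed-form MSE: {mse:.4f}, Monte Carlo: {mc:.4f}")
```

The paper's contribution concerns minimizing this MSE over all ${\bf P}$ in the feasible set via the structure ${\bf V}\bm{\Sigma}\bm{\Pi}{\bf U}^*$; the sketch above only evaluates the objective for one fixed feasible ${\bf P}$.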
