Abstract

In any estimation problem, there is always a need to find the bias and mean square error (MSE) of an estimator. These values are then compared against their sample averages obtained from simulation to confirm the theoretical development, and/or the Cramér-Rao lower bound (CRLB) [1] to assess the optimality of the estimator. When the estimator is a nonlinear function of the measurements, it is rather difficult to derive exact expressions for the bias and MSE. Based on a Taylor series expansion (TSE) of the estimator cost function near the true value, [2] provides a generic approximation for these performance measures. In [3], equations for the bias and variance are obtained by a direct TSE of the estimator function. The difference is that [2] performs a TSE of the estimator cost function, while [3] performs a TSE of the estimator itself. We shall review the bias and MSE formulas obtained from these two approaches, provide several representative application examples, and compare their results. It will be explained that for linear parameter estimation problems, both techniques give identical and exact bias and MSE expressions. However, the former has wider applicability than the latter for nonlinear estimation, particularly when the estimate is not an explicit function of the measurements.
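To make the comparison between theory and simulation concrete, the following is a minimal sketch of the [3]-style approach (a direct TSE of the estimator itself) applied to a toy problem of our own choosing, not one taken from the article: estimating alpha = 1/mu from the sample mean of Gaussian data, then checking the TSE bias and MSE predictions against Monte Carlo sample averages.

```python
import numpy as np

# Toy example (an illustration, not from the article): estimate
# alpha = 1/mu from N samples x_i ~ N(mu, sigma^2) via the nonlinear
# estimator alpha_hat = g(x_bar) = 1/x_bar, x_bar the sample mean.
#
# First-order statistics of x_bar: E[x_bar] = mu, var(x_bar) = sigma^2/N.
# A second-order TSE of g(x) = 1/x about mu (the direct expansion of the
# estimator, as in [3]) gives, with g'(mu) = -1/mu^2 and g''(mu) = 2/mu^3:
#   bias(alpha_hat) ~ g''(mu) * var(x_bar) / 2 = sigma^2 / (N * mu^3)
#   MSE(alpha_hat)  ~ g'(mu)^2 * var(x_bar)    = sigma^2 / (N * mu^4)

rng = np.random.default_rng(0)
mu, sigma, N, trials = 2.0, 1.0, 100, 200_000

x = rng.normal(mu, sigma, size=(trials, N))
alpha_hat = 1.0 / x.mean(axis=1)        # nonlinear estimator, one per trial

bias_sim = alpha_hat.mean() - 1.0 / mu  # sample-average bias
mse_sim = np.mean((alpha_hat - 1.0 / mu) ** 2)

bias_tse = sigma**2 / (N * mu**3)       # TSE predictions
mse_tse = sigma**2 / (N * mu**4)

print(f"bias: simulated {bias_sim:.5f}, TSE {bias_tse:.5f}")
print(f"MSE:  simulated {mse_sim:.6f}, TSE {mse_tse:.6f}")
```

For this choice of parameters the simulated bias and MSE agree closely with the TSE approximations, which is exactly the kind of confirmation-by-simulation described above; the residual discrepancy comes from the neglected higher-order terms of the expansion.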
