Abstract

It is well known that appropriately biasing an estimator can lead to a lower mean square error (MSE) than the MSE achievable within the class of unbiased estimators. Nevertheless, the choice of an appropriate bias is generally unclear, and only recently have there been attempts to systematize this selection. These systematic approaches aim at introducing MSE bounds that are lower than the unbiased Cramér–Rao bound (CRB) for all values of the unknown parameters, and at choosing biased estimators that beat the standard maximum-likelihood (ML) and/or least squares (LS) estimators in the finite-sample case. In this paper, we take these approaches one step further and investigate the same problem from the perspective of an end-performance metric different from the classical MSE. This study is motivated by recent advances in the area of system identification indicating that optimal experiment design should take into account the end-performance metric of interest, rather than quantify a quadratic distance of the identified model from the true one.
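As a minimal illustration of the opening claim (not a method from the paper itself), the following Monte Carlo sketch compares the classical unbiased variance estimator, which divides by n-1, with a deliberately biased alternative that divides by n+1; for Gaussian data, the latter is known to achieve a lower MSE:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, true_var = 10, 200_000, 1.0

# Draw many independent samples of size n from N(0, true_var).
x = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
s = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2, axis=1)

# Unbiased estimator: divide the sum of squares by n - 1.
mse_unbiased = np.mean((s / (n - 1) - true_var) ** 2)

# Biased (shrunk) estimator: divide by n + 1, the MSE-optimal
# divisor for Gaussian data.
mse_biased = np.mean((s / (n + 1) - true_var) ** 2)

# Introducing bias reduces variance enough to lower the overall MSE.
assert mse_biased < mse_unbiased
```

This is the standard bias-variance trade-off: shrinking the estimate toward zero adds squared bias but removes more variance, so the total MSE drops.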
