Abstract

This letter gives formulas for computing the Jacobian and Hessian of an estimator defined as the maximizer of a given scoring function, which includes the important cases of maximum likelihood (ML) and least squares (LS) estimation. We use these derivatives to compute two approximations of the estimator risk and show that the linear risk approximation of an ML estimator coincides with the Cramér-Rao bound for a Gaussian signal model when the loss function underlying the risk computation is the squared error loss.
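As a minimal numerical illustration of the stated coincidence, the sketch below uses the simplest Gaussian location model (i.i.d. samples with unknown mean and known variance, an assumption not spelled out in the abstract): the ML estimator is the sample mean, and its empirical squared-error risk should match the Cramér-Rao bound σ²/n.

```python
import random
import statistics

random.seed(0)
mu, sigma = 2.0, 1.5     # true mean and known standard deviation (illustrative values)
n, trials = 50, 20000    # sample size per experiment, number of Monte Carlo trials

sq_errors = []
for _ in range(trials):
    # ML estimator of mu under i.i.d. N(mu, sigma^2) observations: the sample mean
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    mu_hat = sum(sample) / n
    sq_errors.append((mu_hat - mu) ** 2)

risk = statistics.fmean(sq_errors)  # empirical risk under squared error loss
crb = sigma ** 2 / n                # Cramér-Rao bound for this model
print(risk, crb)
```

For this model the two quantities agree up to Monte Carlo error, consistent with the claim that the linear risk approximation of the ML estimator equals the Cramér-Rao bound in the Gaussian case.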
