Abstract

Maximum likelihood (ML) estimation is a popular approach for solving many signal processing problems. Many of these problems cannot be solved analytically, so numerical techniques such as the method of scoring are applied. In many scenarios, however, it is desirable to modify the ML problem to include additional side information, often in the form of parametric constraints that the ML estimate (MLE) must satisfy. We examine the asymptotic normality of the constrained ML (CML) estimate and show that it remains consistent as well as asymptotically efficient (with respect to the constrained Cramér-Rao bound). We also generalize the method of scoring to incorporate the constraints, enforcing them after each iteration. Convergence properties and examples verify the usefulness of the constrained scoring approach. As a particular example, an alternative and more general CML estimator is developed for the linear model with linear constraints.
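As a rough illustration of the final point, and not the paper's own derivation: for the Gaussian linear model with linear equality constraints, the CML estimate reduces to restricted least squares, where the unconstrained MLE is corrected so the constraint holds exactly. All variable names and the toy problem below are assumptions made for this sketch.

```python
import numpy as np

# Toy linear model y = X @ theta + noise with one linear equality
# constraint A @ theta = b (a hypothetical setup for illustration).
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
theta_true = np.array([1.0, 2.0, 3.0])
y = X @ theta_true + 0.1 * rng.normal(size=n)

# Constraint: theta_0 + theta_1 + theta_2 = 6 (satisfied by theta_true).
A = np.ones((1, p))
b = np.array([6.0])

# Unconstrained MLE under Gaussian noise = ordinary least squares.
G = X.T @ X
theta_ols = np.linalg.solve(G, X.T @ y)

# Restricted least squares: subtract a correction along G^{-1} A^T so
# that the constraint A @ theta = b is met exactly.
Ginv_At = np.linalg.solve(G, A.T)
lam = np.linalg.solve(A @ Ginv_At, A @ theta_ols - b)
theta_cml = theta_ols - Ginv_At @ lam

print(np.allclose(A @ theta_cml, b))   # constraint satisfied by the CML estimate
```

The correction term is the standard Lagrange-multiplier adjustment; the constrained estimate can only increase the residual sum of squares relative to the unconstrained MLE, which is a quick sanity check on any implementation.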
