Abstract

Maximum-likelihood (ML) estimation is a popular approach to solving many signal processing problems. Many of these problems cannot be solved analytically, so numerical techniques such as the method of scoring are applied. However, in many scenarios it is desirable to modify the ML problem with the inclusion of additional side information. Often this side information takes the form of parametric constraints, which the ML estimate (MLE) must now satisfy. We unify the asymptotic constrained ML (CML) theory with the constrained Cramér-Rao bound (CCRB) theory by showing that the CML estimate (CMLE) is asymptotically efficient with respect to the CCRB. We also generalize the classical method of scoring using the CCRB to include the constraints, so that the constraints are satisfied after each iteration. Convergence properties and examples verify the usefulness of the constrained scoring approach. As a particular example, an alternative and more general CMLE is developed for the complex parameter linear model with linear constraints. A novel proof of the efficiency of this estimator is provided using the CCRB.
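The constrained scoring iteration described above can be sketched for a simple real-valued case. The following is a minimal illustration, not the paper's algorithm: it assumes a linear model y = Hθ + w with Gaussian noise and a linear constraint Aθ = b, takes U as a basis for the null space of A, and replaces the inverse Fisher information in classical scoring with the CCRB-style matrix U(UᵀFU)⁻¹Uᵀ, so every iterate remains feasible (since AU = 0). All problem dimensions and the specific constraint are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model y = H @ theta + w, w ~ N(0, sigma^2 I),
# with linear constraint A @ theta = b (here: components sum to 1).
p, n = 3, 50
H = rng.standard_normal((n, p))
theta_true = np.array([0.2, 0.3, 0.5])        # satisfies the constraint
y = H @ theta_true + 0.1 * rng.standard_normal(n)

A = np.ones((1, p))
b = np.array([1.0])

# Basis U for the null space of A (via SVD), so that A @ U = 0.
_, _, Vt = np.linalg.svd(A)
U = Vt[A.shape[0]:].T                          # p x (p - rank(A))

# Constrained scoring: theta <- theta + U (U' F U)^{-1} U' s(theta),
# where F = H'H / sigma^2 is the Fisher information and
# s(theta) = H'(y - H @ theta) / sigma^2 is the score (sigma^2 cancels).
theta = np.linalg.lstsq(A, b, rcond=None)[0]   # feasible starting point
for _ in range(5):
    s = H.T @ (y - H @ theta)
    theta = theta + U @ np.linalg.solve(U.T @ (H.T @ H) @ U, U.T @ s)

print(theta, A @ theta)  # A @ theta stays equal to b at every iterate
```

Because the model is linear in θ and the constraint is linear, this iteration reaches the constrained least-squares solution in a single step from any feasible starting point; in nonlinear problems the same update would be repeated until convergence.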
