Abstract

The paper contrasts two different ways of incorporating a priori information in parameter estimation, namely hard-constrained and soft-constrained estimation. Hard-constrained estimation can be interpreted, in the Bayesian framework, as maximum a posteriori probability (MAP) estimation with a uniform prior distribution over the constraining set, and amounts to a constrained least-squares (LS) optimization. Novel analytical results on the statistics of the hard-constrained estimator are presented for a linear regression model subject to lower and upper bounds on a single parameter. This analysis makes it possible to quantify the mean squared error (MSE) reduction implied by the constraints and to see how it depends on the size of the constraining set relative to the confidence regions of the unconstrained estimator. By contrast, soft-constrained estimation can be regarded as MAP estimation with a Gaussian prior distribution and amounts to a less computationally demanding unconstrained LS optimization with a cost suitably modified by the mean and covariance of the Gaussian distribution. Results on the design of the prior covariance of the soft-constrained estimator for optimal MSE performance are also given. Finally, a practical case study concerning a line-fitting estimation problem is presented in order to validate the theoretical results derived in the paper and to compare the performance of the hard-constrained and soft-constrained approaches under different settings.
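To make the two estimators concrete, the following is a minimal sketch of a line-fitting example in the spirit of the abstract's case study. All names, the synthetic data, the bounds, and the prior parameters are illustrative assumptions, not taken from the paper. The hard-constrained estimator solves constrained LS with bounds on the single slope parameter (when the bound is active, the slope is fixed at the bound and the intercept is re-fitted); the soft-constrained estimator solves unconstrained LS with the cost modified by the mean and covariance of a Gaussian prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic line-fitting data: y = a*x + b + noise.
a_true, b_true, sigma = 1.5, 0.5, 0.3
x = np.linspace(0.0, 1.0, 50)
X = np.column_stack([x, np.ones_like(x)])
y = a_true * x + b_true + sigma * rng.standard_normal(x.size)

def hard_constrained_ls(X, y, lo, hi):
    """Constrained LS with lo <= slope <= hi (MAP with a uniform prior
    over the constraining set)."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]  # unconstrained LS
    if lo <= theta[0] <= hi:
        return theta                              # constraint inactive
    a = np.clip(theta[0], lo, hi)                 # activate the nearer bound
    b = np.mean(y - a * X[:, 0])                  # re-fit intercept with slope fixed
    return np.array([a, b])

def soft_constrained_ls(X, y, mu, Sigma, sigma2):
    """MAP with a Gaussian prior N(mu, Sigma): an unconstrained LS problem
    whose cost is modified by the prior mean and covariance."""
    Si = np.linalg.inv(Sigma)
    A = X.T @ X / sigma2 + Si
    rhs = X.T @ y / sigma2 + Si @ mu
    return np.linalg.solve(A, rhs)

# Bounds chosen (for illustration) so the hard constraint is informative.
theta_hard = hard_constrained_ls(X, y, lo=0.0, hi=1.0)
theta_soft = soft_constrained_ls(
    X, y, mu=np.array([1.0, 0.0]), Sigma=np.diag([0.1, 10.0]), sigma2=sigma**2
)
print("hard-constrained:", theta_hard)
print("soft-constrained:", theta_soft)
```

The sketch also illustrates the computational point made in the abstract: the soft-constrained estimate is a single linear solve, whereas the hard-constrained one requires checking and enforcing the bounds.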
