Abstract

When prior information of an approximate and subjective nature is available beyond what the time series contains, that is, knowledge not directly expressible as exact linear constraints, we propose the penalty function method for estimating a linear statistical model, as an alternative to the mixed estimation of Theil and Goldberger. The method minimizes a mathematical function, known as a penalty function, in which a constraint expressed in quadratic form is combined linearly with an objective function through coefficients known as penalty parameters. This particular functional form for the constraint can be seen as a way to capture all the uncertainty contained in this type of prior knowledge. The algorithm derived from the method depends on the penalty parameters, which admit two interpretations: (a) as weights assigned to the constraint in the minimization procedure, in which case the result can be seen as a generalization of the ridge estimator; (b) as variances of a stochastic constraint, in which case the algorithm is similar to the mixed estimator but, unlike it, contains no unknown parameters in its formulation and can therefore be seen as a feasible estimator. Under either interpretation, the penalty function method produces an algorithm that is a flexible instrument from both a theoretical and a practical viewpoint.
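As a rough illustration of the estimator the abstract describes, the sketch below (not the paper's own code; the names, the design matrix, and the choice of `R`, `r`, and the scalar penalty parameter `k` are all assumptions for the example) minimizes a least-squares objective combined linearly with a quadratic constraint penalty, which has the closed form `(X'X + k R'R)^{-1}(X'y + k R'r)`. With `R = I` and `r = 0` it reduces to the ordinary ridge estimator:

```python
import numpy as np

def penalty_estimator(X, y, R, r, k):
    """Minimize ||y - X b||^2 + k ||R b - r||^2 over b.

    k is the penalty parameter: the weight given to the
    (approximate, subjective) constraint R b ~ r.
    Closed form: b = (X'X + k R'R)^{-1} (X'y + k R'r).
    """
    lhs = X.T @ X + k * (R.T @ R)
    rhs = X.T @ y + k * (R.T @ r)
    return np.linalg.solve(lhs, rhs)

# Hypothetical example: shrink all coefficients toward zero (ridge case).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=50)
R, r = np.eye(3), np.zeros(3)

b_ols = penalty_estimator(X, y, R, r, k=0.0)    # k = 0: plain OLS
b_pen = penalty_estimator(X, y, R, r, k=10.0)   # k > 0: shrunk toward r
```

Increasing `k` moves the solution from the unconstrained least-squares fit toward exact satisfaction of `R b = r`, mirroring the abstract's interpretation of the penalty parameters as weights on the constraint.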
