Introduction

The classical linear estimation problem for a finite number of parameters using least squares dates back to Gauss [1]. Aitken [2] generalized the method of parameter estimation: instead of obtaining the set of parameters which minimizes the sum of squares of the residuals (the differences between the observed and expected values), a quadratic form of the residuals is minimized, namely the residual vector, transposed, times the inverse covariance matrix of the observational errors, times the residual vector (written out below). The estimator so obtained is known as the Markov estimator, the minimum variance estimator, or the best linear unbiased estimator (BLUE). The estimator is optimal in the following sense: if the random errors of observation are jointly normally distributed, then the Markov estimator is also the maximum likelihood estimator; further, even if the random errors are not jointly normally distributed, the covariance matrix of the BLUE is less than or equal to the covariance matrix of any other linear unbiased estimator, in the positive-semidefinite ordering [3].

A closely related theory of linear smoothing and prediction has been developed, beginning with the work of Wiener [4] and Kolmogorov [5]. In recent years, application of the classical linear smoothing and estimation methods has led to computational difficulties because of the need to handle large amounts of data or to perform on-line or near real-time parameter estimation. Attempts to lighten the computational load have led to the development of recursive solutions in which it is not necessary to store all previous data, but only the previous parameter estimates and some recent data.

Blum [6] considers a problem equivalent to estimating the coefficients of a least-squares polynomial curve fit by recursive means, which is suitable for real-time smoothing and prediction. The solution permits a substantial reduction in storage and arithmetic operations when the number of data points is much greater than the degree of the polynomial (a sketch of the recursive idea is given below). In [7] Blum considers the design of a smoothing filter which is not derived from least-squares or minimum-variance criteria. Again the solution is equivalent to a polynomial curve-fitting procedure, but the output of the filter may be biased. The advantage gained is a simple difference equation relating the present output to the present input and past output values, which is computationally simple to implement.
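
For concreteness, Aitken's criterion and the resulting Markov estimator can be written compactly. The notation below (observation vector y, design matrix X, parameter vector β, error covariance V) is ours, not necessarily that of [2]:

```latex
% Linear model: y = X\beta + e, with E[e] = 0 and Cov(e) = V.
% Aitken's criterion: minimize the quadratic form of the residuals
% r = y - X\beta.
\[
  \hat{\beta}
  \;=\; \arg\min_{\beta}\, (y - X\beta)^{\top} V^{-1} (y - X\beta)
  \;=\; \bigl(X^{\top} V^{-1} X\bigr)^{-1} X^{\top} V^{-1} y .
\]
% Its covariance matrix,
\[
  \operatorname{Cov}(\hat{\beta}) \;=\; \bigl(X^{\top} V^{-1} X\bigr)^{-1},
\]
% is minimal among linear unbiased estimators: Cov(b) - Cov(\hat{\beta})
% is positive semidefinite for any other linear unbiased estimator b
% (the Gauss--Markov/Aitken theorem).
```

When V = σ²I this reduces to Gauss's ordinary least squares.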
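
To illustrate the recursive approach, the following is a minimal sketch in the spirit of [6], using the standard recursive least-squares update rather than Blum's specific formulation; the function name, the toy quadratic, and the initialization are our choices for illustration:

```python
import numpy as np

def rls_update(beta, P, x, y):
    """One recursive least-squares step.

    beta : current parameter estimate, shape (n,)
    P    : current inverse information matrix, shape (n, n)
    x    : new regressor row, shape (n,) -- e.g. powers of t for a polynomial fit
    y    : new scalar observation

    Only beta and P are carried forward; past data need not be stored.
    """
    Px = P @ x
    gain = Px / (1.0 + x @ Px)            # gain vector for the new sample
    beta = beta + gain * (y - x @ beta)   # correct estimate by the new residual
    P = P - np.outer(gain, Px)            # Sherman-Morrison rank-1 update of P
    return beta, P

# Example: fit y ~ b0 + b1*t + b2*t^2 one sample at a time.
rng = np.random.default_rng(0)
beta = np.zeros(3)
P = 1e6 * np.eye(3)                       # large P ~ diffuse initial uncertainty
for t in np.linspace(0.0, 1.0, 200):
    x = np.array([1.0, t, t * t])
    y = 2.0 - 3.0 * t + 5.0 * t * t + 0.1 * rng.standard_normal()
    beta, P = rls_update(beta, P, x, y)
print(beta)  # should be close to [2, -3, 5]
```

Each step costs O(n²) arithmetic for n parameters, independent of how many points have been processed, which is the source of the savings when the data count far exceeds the polynomial degree.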
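
The filter of [7] is not reproduced here, but the kind of difference equation described, with the present output computed from the present input and past outputs alone, can be illustrated by a first-order recursive smoother; the coefficient alpha and this particular filter are illustrative choices of ours:

```python
def smooth(x, alpha=0.2):
    """First-order recursive smoother: s[n] = s[n-1] + alpha * (x[n] - s[n-1]).

    Illustrative only -- not the filter of [7], but the same style of
    difference equation: each output needs only the present input and the
    previous output, so no backlog of data is stored.
    """
    s = x[0]
    out = []
    for xn in x:
        s = s + alpha * (xn - s)
        out.append(s)
    return out

print(smooth([1.0, 1.0, 10.0, 1.0, 1.0]))  # the spike at 10.0 is attenuated
```

As with Blum's filter, the output of such a smoother is in general biased, the price paid for the very cheap update.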
