Abstract

One frequently used approach to the analysis of data in experimental design models when some of the observations are missing is to estimate the missing pieces of data and then proceed with the analysis, making adjustments to account for the estimation [see, e.g., Cochran and Cox (1957)]. The standard procedure for estimating the missing values is to minimize the residual sum of squares. It has been pointed out [e.g., by Jaech (1966)] that this procedure is equivalent to choosing the missing values so that the model fits perfectly at those points, i.e., so that the corresponding residuals are zero. The purpose of the present note is to discuss the criterion of minimum residual sum of squares and to prove the following theorem: estimation of missing values in a linear statistical model by minimization of the residual sum of squares is equivalent to setting the corresponding residuals equal to zero. Jaech (1966) proved this theorem for the special case of one missing value; his exposition is in scalar notation and at one point assumes that the design matrix is of full rank. The proof given here covers the case of one or more missing values and avoids the assumption that the design matrix is of full rank. The mathematics is shortened and clarified by the use of matrix notation and the notion of projections.
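The theorem can be illustrated numerically. The sketch below (a hypothetical toy example, not taken from the paper) treats one missing observation in a simple linear regression as an extra parameter and minimizes the residual sum of squares jointly; the fitted residual at the missing cell comes out zero, and the coefficient estimates coincide with the least-squares fit to the observed rows alone.

```python
import numpy as np

# Hypothetical toy design (an assumption for illustration): simple linear
# regression y = b0 + b1*t at five design points, the last observation missing.
X = np.column_stack([np.ones(5), np.arange(5.0)])
y = np.array([1.0, 2.1, 2.9, 4.2, 0.0])  # 0.0 is a placeholder at the missing cell
miss = 4

# Treat the missing observation y_miss as one extra unknown and minimize the
# residual sum of squares jointly over (beta, y_miss).  The residual vector is
# (y + y_miss*e) - X beta, where e is the indicator of the missing row, so the
# joint problem is ordinary least squares with the design augmented by -e.
e = np.zeros(5)
e[miss] = 1.0
A = np.hstack([X, -e[:, None]])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
beta_hat, y_hat = theta[:2], theta[2]

# The theorem in action: the residual at the missing cell is (numerically)
# zero, i.e. the estimated value lies exactly on the fitted surface.
print(y_hat - X[miss] @ beta_hat)   # ~0 up to rounding

# Equivalently, beta_hat is the least-squares fit to the observed rows alone,
# and y_hat is simply the fitted value at the missing point.
beta_obs, *_ = np.linalg.lstsq(X[:miss], y[:miss], rcond=None)
print(np.allclose(beta_hat, beta_obs))
```

The separation of the joint minimization is the heart of the argument: for any fixed beta, the optimal choice of y_miss zeroes the residual at the missing cell, so the remaining minimization over beta involves only the observed rows.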
