Abstract

It should be of more than casual interest to statistics teachers, practitioners, and theorists (and to computer specialists also) that the ritual of forming and solving summary normal equations is a deferrable, delegable, and even dispensable feature of least-squares estimation. Without performing any prior arithmetical operations on the data at all, we may instantly organize observations, weights, and the assumptions regarding residual aggregates into a supermatrix system that is mathematically equivalent to the conventional normal equations. The residuals are incorporated in this enlarged system as explicit unknowns, but solution for them is unnecessary. The conventional normal equations do not have to be constructed in the course of processing the supermatrix system; actually, they enter the system in light disguise at the outset, as conditions involving residual aggregates.

In the remainder of this paper, the supermatrix approach is outlined briefly and illustrated for a familiar class of cases: n observations are to be harmonized for a linear function in which the dependent variable alone is subject to error and the m unknown constants (n > m) are to be estimated. The exposition takes advantage of earlier contributions of the author at annual meetings of the American Statistical Association [1, 2, 3] and elsewhere [4, 5].

The general linear function may be written conveniently in matrix form (and, for our purposes, more exactly too) as Y = Xb + e = Xb + Ie. (This equation is really an identity.) Y is the dependent variable (n × 1). X is the matrix of fixed coefficients (n × m) of b, the vector of unknown constants to be determined (m × 1). I is an identity matrix (n × n) that contains the coefficients of the residuals (unity) in its diagonal. The vector of residuals is e (n × 1). In the supermatrix approach, as we see below, b and e are combined into a supervector of unknowns.

Let us suppose, first, that the observations are equally weighted. For this subclass of cases, the matrix normal equation that provides the least-squares estimates for b is known to be X'Y = X'Xb. Accordingly, we ask: What is the simplest companion equation that can be solved simultaneously with Y = Xb + Ie for the vectors b and e and that yields the required matrix normal equation upon elimination of e? The answer is X'e = 0, where X' is the transpose of X; it is another way of expressing the usual normal equations (substituting e = Y − Xb gives X'(Y − Xb) = 0, that is, X'Y = X'Xb). Stacking the model and companion matrix equations into a simultaneous (or supermatrix) system, we have:

    [ Y ]   [ X  I  ] [ b ]
    [ 0 ] = [ 0  X' ] [ e ]
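As a quick numerical check (a minimal NumPy sketch, not part of the original paper; the data and dimensions are made up for illustration), the supermatrix system above can be assembled and solved directly for the supervector [b; e], and its b-component compared against the solution of the conventional normal equations X'Y = X'Xb:

```python
import numpy as np

# Hypothetical example: n = 5 observations, m = 2 unknown constants.
rng = np.random.default_rng(0)
n, m = 5, 2
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])  # n x m design matrix
Y = rng.normal(size=n)                                        # n x 1 observations

# Supermatrix system: stack the model Y = Xb + Ie with the
# companion condition X'e = 0 into one (n + m) x (n + m) system:
#   [ Y ]   [ X  I  ] [ b ]
#   [ 0 ] = [ 0  X' ] [ e ]
A = np.block([
    [X,                np.eye(n)],
    [np.zeros((m, m)), X.T],
])
rhs = np.concatenate([Y, np.zeros(m)])

solution = np.linalg.solve(A, rhs)
b_super, e_super = solution[:m], solution[m:]

# Conventional route: solve the normal equations X'X b = X'Y.
b_normal = np.linalg.solve(X.T @ X, X.T @ Y)

assert np.allclose(b_super, b_normal)  # both routes give the same estimates
assert np.allclose(X.T @ e_super, 0)   # residuals satisfy the condition X'e = 0
```

Note that the residuals e appear as explicit unknowns in the supermatrix route, exactly as the abstract describes, yet no normal equations are ever formed from the data before the system is written down.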
