Abstract

Krämer (Sankhyā 42:130–131, 1980) posed the following problem: “Which are the $\mathbf{y}$, given $\mathbf{X}$ and $\mathbf{V}$, such that OLS and Gauss–Markov are equal?” In other words, the problem aimed at identifying those vectors $\mathbf{y}$ for which the ordinary least squares (OLS) and Gauss–Markov estimates of the parameter vector $\boldsymbol{\beta}$ coincide under the general Gauss–Markov model $\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{u}$. The problem was later called a “twist” to Kruskal’s Theorem, which provides necessary and sufficient conditions for the OLS and Gauss–Markov estimates of $\boldsymbol{\beta}$ to be equal. The present paper focuses on a problem similar to the one posed by Krämer in the aforementioned paper. However, instead of the estimation of $\boldsymbol{\beta}$, we consider the estimation of the systematic part $\mathbf{X}\boldsymbol{\beta}$, which is a natural consequence of relaxing Krämer’s assumption that $\mathbf{X}$ and $\mathbf{V}$ are of full (column) rank. Further results, dealing with the Euclidean distance between the best linear unbiased estimator (BLUE) and the ordinary least squares estimator (OLSE) of $\mathbf{X}\boldsymbol{\beta}$, as well as with an equality between the BLUE and the OLSE, are also provided. The calculations are mostly based on a joint partitioned representation of a pair of orthogonal projectors.
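Kruskal’s Theorem states that OLSE(Xβ) = BLUE(Xβ) for every y exactly when R(VX) ⊆ R(X), i.e., when V commutes with the orthogonal projector P_X. The following NumPy sketch illustrates this numerically; the covariance matrix V = 2P_X + 3M is an illustrative choice satisfying Kruskal’s condition (not taken from the paper), and BLUE(Xβ) is computed from the standard representation y − VM(MVM)⁻M y, valid for positive definite V:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2
X = rng.standard_normal((n, p))   # full column rank with probability 1
P = X @ np.linalg.pinv(X)         # orthogonal projector onto R(X)
M = np.eye(n) - P                 # orthogonal projector onto R(X)^perp

# Illustrative V commuting with P (Kruskal's condition), positive definite
V = 2.0 * P + 3.0 * M

y = rng.standard_normal(n)
olse = P @ y                                           # OLSE(Xb) = P_X y
blue = y - V @ M @ np.linalg.pinv(M @ V @ M) @ M @ y   # BLUE(Xb), V pos. def.

print(np.allclose(olse, blue))    # True under Kruskal's condition
```

For a generic positive definite V that does not commute with P_X, the two estimates differ for most y, which is exactly why Krämer’s question of characterizing the y for which they agree is of interest.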

Highlights

  • Let us consider the general Gauss–Markov model y = Xβ + u, (1) where y is an n × 1 observable random vector, X is a known n × p model matrix, β is a p × 1 vector of unknown parameters, and u is an n × 1 random error vector.

  • As in Krämer (1980), we consider the problem of identifying those observation vectors y which yield the same value of the ordinary least squares estimator OLSE(Xβ) and the best linear unbiased estimator BLUE(Xβ).

  • Corollary 1 corresponds to the Theorem of Krämer (1980), where the identity BLUE(β) = OLSE(β) is explored under the assumption that X and V are of full rank.

Summary

Introduction

Consider the general Gauss–Markov model y = Xβ + u, where y is an n × 1 observable random vector, X is a known n × p model matrix, β is a p × 1 vector of unknown parameters, and u is an n × 1 random error vector. The problem posed by Krämer (1980) aimed at identifying those vectors y for which the OLS and Gauss–Markov estimates of the parameter vector β coincide. Referring to this problem, in a follow-up paper Krämer et al. (1996) called it a “twist” to Kruskal’s Theorem. In what follows, PVM = VM(VM)† denotes the orthogonal projector onto R(VM). It was pointed out in Baksalary and Trenkler (2012, Remark 3.1) that Eq. (3) always has a solution G, and that each G satisfying (3) yields a representation of the best linear unbiased estimator BLUE(Xβ) of Xβ. All these representations coincide; see Groß (2004, Corollary 3).
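A common form of the equation defining such matrices G in this literature is Rao’s fundamental BLUE equation G(X : VM) = (X : 0), with M = I − P_X; whether this is exactly the paper’s Eq. (3) is an assumption here. Under that assumption, the following NumPy sketch computes one solution G and checks that Gy reproduces the generalized least squares fit of Xβ when V is positive definite:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 7, 3
X = rng.standard_normal((n, p))          # full column rank with probability 1
A = rng.standard_normal((n, n))
V = A @ A.T + np.eye(n)                  # positive definite covariance matrix
P = X @ np.linalg.pinv(X)                # orthogonal projector onto R(X)
M = np.eye(n) - P

# Fundamental BLUE equation: G (X : VM) = (X : 0)
lhs = np.hstack([X, V @ M])
rhs = np.hstack([X, np.zeros((n, n))])
G = rhs @ np.linalg.pinv(lhs)            # one solution of the consistent equation

y = rng.standard_normal(n)
Vi = np.linalg.inv(V)
gls = X @ np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)  # GLS estimate of Xb

print(np.allclose(G @ y, gls))           # True: Gy is BLUE(Xb)
```

Since V is positive definite here, the columns of (X : VM) span the whole space, so G is in fact unique; in the general (possibly singular) case treated in the paper, different solutions G of the equation all yield the same value Gy, in line with Groß (2004, Corollary 3).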

Representations of BLUE and OLSE
Another twist
Bounds for the Euclidean distance
Equality of BLUE and OLSE
