Abstract

We consider the general linear model \(\mathbf{y} = \mathbf{X} {\pmb {\beta }}+ {\pmb {\varepsilon }}\), denoted as \( \mathscr {M} = \{ \mathbf{y}, \mathbf{X}{\pmb {\beta }}, \mathbf{V} \}\), supplemented with a new, unobservable random vector \(\mathbf{y}_*\), coming from \(\mathbf{y}_* = \mathbf{X}_*{\pmb {\beta }}+ {\pmb {\varepsilon }}_*\), where the covariance matrix of \(\mathbf{y}_*\) is known, as is the cross-covariance matrix between \(\mathbf{y}_*\) and \(\mathbf{y}\). A linear statistic \(\mathbf{F} \mathbf{y}\) is called linearly sufficient for \(\mathbf{X}_* {\pmb {\beta }}\) if there exists a matrix \(\mathbf{A}\) such that \(\mathbf{A} \mathbf{F} \mathbf{y}\) is the best linear unbiased estimator, BLUE, of \(\mathbf{X}_* {\pmb {\beta }}\). The concept of linear sufficiency with respect to a predictable random vector is defined in the corresponding way, but considering the best linear unbiased predictor, BLUP, instead of the BLUE. In this paper, we consider the linear sufficiency of \(\mathbf{F}\mathbf{y}\) with respect to \( \mathbf{y}_*\), \(\mathbf{X}_* {\pmb {\beta }}\), and \({\pmb {\varepsilon }}_*\). We also apply our results to the linear mixed model. The concept of linear sufficiency was essentially introduced in the early 1980s by Baksalary, Kala, and Drygas. Recently, several papers providing further properties of linear sufficiency have been published by the present authors. Our aim is to provide an easy-to-read review of recent results and, while doing that, to go through some basic concepts related to linear sufficiency. As a review paper, we do not provide many proofs; instead, our goal is to explain and clarify the central results.
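The definition of linear sufficiency above can be illustrated numerically. The following minimal sketch (not from the paper; it assumes a positive definite \(\mathbf{V}\) and takes \(\mathbf{X}_* = \mathbf{X}\), so that the BLUE of \(\mathbf{X}{\pmb{\beta}}\) has the familiar generalized least-squares form \(\mathbf{G}\mathbf{y}\) with \(\mathbf{G} = \mathbf{X}(\mathbf{X}'\mathbf{V}^{-1}\mathbf{X})^{-1}\mathbf{X}'\mathbf{V}^{-1}\)) checks linear sufficiency of \(\mathbf{F}\mathbf{y}\) via the condition that \(\mathbf{G} = \mathbf{A}\mathbf{F}\) for some \(\mathbf{A}\), i.e., that the row space of \(\mathbf{G}\) lies in the row space of \(\mathbf{F}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2
X = rng.normal(size=(n, p))          # full column rank with probability 1
V = np.eye(n)                        # positive definite covariance matrix

# BLUE of X beta under {y, X beta, V} is G y, with G the GLS "hat" matrix.
G = X @ np.linalg.solve(X.T @ np.linalg.inv(V) @ X, X.T @ np.linalg.inv(V))

# Candidate statistic F y: F = X' V^{-1} is a classical linearly
# sufficient statistic, since G = X (X' V^{-1} X)^{-1} F.
F = X.T @ np.linalg.inv(V)

# F y is linearly sufficient for X beta iff G = A F for some A,
# equivalently G F^+ F = G (F^+ F projects onto the row space of F).
linearly_sufficient = np.allclose(G @ np.linalg.pinv(F) @ F, G)
print(linearly_sufficient)  # True
```

A statistic that discards information, e.g. \(\mathbf{F}\) equal to a single row of the identity matrix, would generally fail this check.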
