Abstract

Today, Linear Mixed Models (LMMs) are mostly fitted by assuming that random effects and errors have Gaussian distributions, and therefore by using Maximum Likelihood (ML) or REML estimation. However, for many data sets, that double assumption is unlikely to hold, particularly for the random effects, a crucial component whose magnitude assessment is key in such modeling. Alternative fitting methods not relying on that assumption (such as ANOVA methods and Rao's MINQUE) quite often apply only to the very constrained class of variance components models. In this paper, a new computationally feasible estimation methodology is designed, first for the widely used class of 2-level (or longitudinal) LMMs, with the only assumption (beyond the usual basic ones) being that residual errors are uncorrelated and homoscedastic, and with no distributional assumption imposed on the random effects. A major asset of this new approach is that it yields nonnegative variance estimates and covariance matrix estimates that are symmetric and at least positive semi-definite. Furthermore, it is shown that when the LMM is indeed Gaussian, this new methodology differs from ML only through a slight variation in the denominator of the residual variance estimate. The new methodology actually generalizes to LMMs a well-known nonparametric fitting procedure for standard Linear Models. Finally, the methodology is also extended to ANOVA LMMs, generalizing an old method by Henderson for ML estimation in such models under normality.
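For reference, the 2-level (longitudinal) LMM class and the assumptions stated above can be written in standard notation (a sketch using conventional symbols, not necessarily the authors' exact notation):

```latex
% 2-level LMM, cluster i = 1, ..., m with n_i observations each
y_i = X_i \beta + Z_i b_i + \varepsilon_i ,
\qquad i = 1, \dots, m,
```

with residual errors assumed uncorrelated and homoscedastic, $\operatorname{Var}(\varepsilon_i) = \sigma^2 I_{n_i}$, and the cluster random effects $b_i$ subject to no distributional assumption beyond $\mathrm{E}(b_i) = 0$ and $\operatorname{Var}(b_i) = D$, where $D$ is the cluster random effects covariance matrix discussed throughout.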

Highlights

  • It is shown that when the Linear Mixed Model (LMM) is indeed Gaussian, this new methodology differs from Maximum Likelihood (ML) only through a slight variation in the denominator of the residual variance estimate

  • It has been difficult to routinely fit LMMs without assuming that both random effects and residual errors have Gaussian distributions

  • For many data sets, that assumption may be debatable, especially for the random effects. This is disturbing, since modeling the behavior of the random effects is one of the main goals of LMM fitting in the first place

Summary

Introduction

We present supplementary materials useful for understanding the article, covering errors, a diagonal covariance matrix for the cluster random effects, and ANOVA LMMs. If the Cholesky software routine declares the newly computed D(t) at an iteration t not to be full rank, the iterative algorithm is stopped and the user is informed why: the true cluster random effects covariance matrix D is either singular or close to such a matrix. After that premature termination (and as in the case of a singular X^T X previously), the algorithm returns the estimated rank of D(t) and the indices of its most uncorrelated columns. Using these pieces of information as estimates of the corresponding features of the true (but unknown) D, the user can re-launch the 3S iterative algorithm, but assuming a diagonal covariance matrix for the cluster random effects vector. The resulting variants carry, respectively, the code names 3S-A1-V1-d and 3S-A1-V2-d.
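The Cholesky-based stopping rule described above can be sketched as follows. This is a minimal illustration, not the authors' actual routine: the function name and the eigenvalue-based rank estimate are assumptions introduced here for clarity.

```python
import numpy as np

def check_full_rank(D_t, tol=1e-10):
    """Illustrative check of whether a candidate covariance matrix D(t)
    is numerically full rank, via a Cholesky factorization.

    Returns (is_full_rank, estimated_rank). When the Cholesky routine
    fails, D(t) is singular or close to singular, and an estimated rank
    is computed from its eigenvalues (one possible choice of estimator).
    """
    try:
        np.linalg.cholesky(D_t)          # succeeds only if D(t) is positive definite
        return True, D_t.shape[0]
    except np.linalg.LinAlgError:
        # Cholesky failed: D(t) is singular or near-singular.
        # Estimate its rank as the number of eigenvalues clearly above zero.
        eigvals = np.linalg.eigvalsh(D_t)
        rank = int(np.sum(eigvals > tol * eigvals.max()))
        return False, rank
```

In the algorithm described above, a failed check would trigger the premature termination and the reporting of the estimated rank, after which the user can restart with a diagonal covariance structure for the cluster random effects.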

Linear Mixed Models
Prediction of Random Effects
Henderson’s Mixed Model Equations
An Important Preliminary
More about the HMMEs Solutions
Starting Ideas
The Estimating Equations
Relationship with the HMMEs
Numerical Examples
A Simulation Study
Two Real World Data Sets
Application to the Cake Data
Application to the Blackmore Longitudinal Data
Concluding Remarks
