Abstract

The large number of markers in genome-wide prediction demands the use of methods with regularization and of model comparison based on some hold-out test prediction error measure. In quantitative genetics, it is common practice to calculate the squared Pearson correlation coefficient (r²) as a standardized measure of the predictive accuracy of a model. Based on arguments from bias–variance trade-off theory in statistical learning, we show that shrinkage of the regression coefficients (i.e., QTL effects) reduces the prediction mean squared error (MSE) by introducing model bias compared with the ordinary least squares method. We also show that the LASSO and the adaptive LASSO (ALASSO) can reduce the model bias and prediction MSE by adding model variance. In an application of ridge regression, the LASSO, and the ALASSO to a simulated example based on results for 9,723 SNPs and 3,226 individuals, the LASSO was selected as the best model when r² was used as the measure. However, when model selection was based on the test MSE and the coefficient of determination (R²), the ALASSO proved to be the best method. Hence, use of r² may lead to selection of the wrong model and, therefore, also to suboptimal ranking of phenotype predictions and genomic breeding values. Instead, we propose using the test MSE for model selection and R² as a standardized measure of accuracy.
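To make the distinction between the two measures concrete, the following minimal sketch (in Python with NumPy; simulated data and hypothetical model names, not the study's) shows why r² can mislead model selection: the squared Pearson correlation is invariant to shifting and rescaling of the predictions, whereas the test MSE and R² penalize such miscalibration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated hold-out phenotypes and predictions from two hypothetical models.
    y_test = rng.normal(size=200)
    pred_a = y_test + rng.normal(scale=0.5, size=200)  # well-calibrated predictions
    pred_b = 2.0 * pred_a + 1.0                        # same correlation, but biased and overdispersed

    def r2_pearson(y, yhat):
        # Squared Pearson correlation: unchanged by affine transforms of yhat.
        return np.corrcoef(y, yhat)[0, 1] ** 2

    def mse(y, yhat):
        return np.mean((y - yhat) ** 2)

    def r2_coef_det(y, yhat):
        # Coefficient of determination: 1 - MSE/Var(y); penalizes bias and scale errors.
        return 1.0 - mse(y, yhat) / np.mean((y - y.mean()) ** 2)

    for name, yhat in [("model A", pred_a), ("model B", pred_b)]:
        print(name, r2_pearson(y_test, yhat), mse(y_test, yhat), r2_coef_det(y_test, yhat))
    # Both models have identical r2, but model B has far worse test MSE and R2.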

Highlights

  • At the heart of classical quantitative genetics is linear model theory (Lynch and Walsh, 1998)

  • If p ≪ n, we can set up the linear model y = Xβ + e, where each individual's genotype scores (0, 1, or 2) are collected in a matrix X and the corresponding phenotypes in a vector y, and use standard ordinary least squares (OLS) to obtain unbiased estimates of the regression coefficients of the genetic markers, i.e., βOLS = (XᵀX)⁻¹Xᵀy

  • Compared with OLS, the LASSO and RR can yield a reduction in variance at the expense of some increase in bias, and so generate lower mean squared error (MSE) and better prediction accuracy (Hastie et al., 2009); a small simulation sketch of this trade-off follows this list
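
As a minimal illustration of that bias–variance trade-off (a sketch with simulated genotypes and scikit-learn estimators; the penalty values are illustrative, not the study's tuning), shrinkage estimators can beat OLS on hold-out MSE:

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge, Lasso
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n, p = 300, 200                               # many markers relative to individuals
    X = rng.choice([0.0, 1.0, 2.0], size=(n, p))  # genotype scores
    X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize columns
    beta = np.zeros(p)
    beta[:10] = rng.normal(size=10)               # a few true QTL effects
    y = X @ beta + rng.normal(size=n)
    y = y - y.mean()                              # center phenotypes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in [("OLS", LinearRegression()),
                        ("ridge", Ridge(alpha=10.0)),
                        ("LASSO", Lasso(alpha=0.05, max_iter=10_000))]:
        model.fit(X_tr, y_tr)
        print(name, mean_squared_error(y_te, model.predict(X_te)))
    # The penalized fits add bias but cut variance, typically lowering test MSE.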


Introduction

At the heart of classical quantitative genetics is linear model theory (Lynch and Walsh, 1998). If p ≪ n, we can set up the linear model y = Xβ + e, where each individual's genotype scores (0, 1, or 2) are collected in a matrix X (standardized over columns to have mean zero and variance one), the corresponding phenotypes in a vector y (centered to have mean zero), and e is a vector of residuals. We can then use standard OLS to obtain unbiased estimates of the regression coefficients of the genetic markers, i.e., βOLS = (XᵀX)⁻¹Xᵀy.
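As a minimal sketch (simulated genotypes only, for illustration), the closed-form OLS estimator can be computed directly and checked against a numerical least-squares solver:

    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 100, 5                                 # p << n, so OLS is well posed
    X = rng.choice([0.0, 1.0, 2.0], size=(n, p))  # genotype scores
    X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize marker columns
    y = X @ rng.normal(size=p) + rng.normal(size=n)
    y = y - y.mean()                              # center phenotypes

    # Closed-form OLS: beta_OLS = (X'X)^{-1} X'y
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

    # Agrees with NumPy's least-squares solver.
    assert np.allclose(beta_ols, np.linalg.lstsq(X, y, rcond=None)[0])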
