Abstract

Background
It has become clear that increasing the density of marker panels, and even the use of sequence data, has not resulted in any meaningful increase in the accuracy of genomic selection (GS) using either regression (RM) or variance component (VC) approaches. This is in part due to the limitations of current methods. Association models are heavily over-parameterized and suffer from severe collinearity and a lack of statistical power. Even when variant effects are not estimated directly, as in VC-based approaches, genomic relationships do not improve once marker density exceeds a certain threshold. In this study, SNP prioritization based on fixation index (FST) scores was used to track the majority of significant QTL and to reduce the dimensionality of the association model.

Results
Two populations with average LD between adjacent markers of 0.3 (P1) and 0.7 (P2) were simulated. In both populations, the genomic data consisted of 400 K SNP markers distributed on 10 chromosomes, a density that roughly mimics 1.2 million SNP markers in the bovine genome. The genomic relationship matrix (G) was calculated for each set of SNPs selected on the basis of their FST scores, and similar numbers of randomly selected SNPs were used for comparison. Using all 400 K SNPs, 46% of the off-diagonal elements (OD) of G were between −0.01 and 0.01. This proportion was 31, 23 and 16% when 80 K, 40 K and 20 K SNPs were selected based on FST scores, respectively. For randomly selected 20 K SNP subsets, around 33% of the OD fell within the same range. Genomic similarity computed using SNPs selected based on FST scores was always higher than that computed using the same number of randomly selected SNPs. Maximum accuracies of 0.741 and 0.828 were achieved when 20 K and 10 K SNPs were selected based on FST scores in P1 and P2, respectively.

Conclusions
Genomic similarity can be maximized by decreasing the number of selected SNPs, but doing so also decreases the percentage of genetic variation explained by the selected markers. Finding the balance between these two parameters could optimize the accuracy of GS in the presence of high-density marker panels.
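
As a rough illustration of the comparison described above, the sketch below builds VanRaden's genomic relationship matrix (G) from a subset of SNPs and measures the fraction of off-diagonal elements falling in the near-zero band [−0.01, 0.01]. The genotype matrix, FST scores, subset size and all variable names here are hypothetical stand-ins on toy data; this is a minimal sketch of the general idea, not the authors' pipeline.

```python
# Minimal sketch: VanRaden (2008) genomic relationship matrix from a SNP
# subset, plus the proportion of off-diagonal elements in [-0.01, 0.01].
# Assumes genotypes coded 0/1/2 (count of the alternate allele).
# `genotypes`, `fst_scores` and `n_keep` are hypothetical toy inputs.
import numpy as np

def vanraden_g(genotypes: np.ndarray) -> np.ndarray:
    """genotypes: (n_individuals, n_snps) matrix of 0/1/2 codes."""
    p = genotypes.mean(axis=0) / 2.0          # allele frequency per SNP
    z = genotypes - 2.0 * p                   # centre each SNP column
    denom = 2.0 * np.sum(p * (1.0 - p))       # VanRaden scaling factor
    return z @ z.T / denom

def near_zero_offdiag_fraction(g: np.ndarray, band: float = 0.01) -> float:
    """Fraction of off-diagonal elements of G within [-band, band]."""
    off = g[~np.eye(g.shape[0], dtype=bool)]
    return float(np.mean(np.abs(off) <= band))

# Toy stand-in data (the paper simulates 400 K SNPs; kept small here).
rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(200, 50_000))
fst_scores = rng.random(50_000)
n_keep = 5_000

# Keep the top-scoring SNPs by FST and a random subset of the same size,
# mirroring the comparison described in the abstract.
top_idx = np.argsort(fst_scores)[-n_keep:]
rand_idx = rng.choice(genotypes.shape[1], size=n_keep, replace=False)

g_top = vanraden_g(genotypes[:, top_idx])
g_rand = vanraden_g(genotypes[:, rand_idx])
print(near_zero_offdiag_fraction(g_top), near_zero_offdiag_fraction(g_rand))
```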

Highlights

  • It has become clear that increasing the density of marker panels, and even the use of sequence data, has not resulted in any meaningful increase in the accuracy of genomic selection (GS) using either regression (RM) or variance component (VC) approaches

  • The distribution and effects of the 200 simulated quantitative trait loci (QTL) are presented in Fig. 1a, and the estimated FST scores of the 400 K SNPs are shown in Fig. 1b for the scenario in which the linkage disequilibrium (LD) between adjacent markers equals 0.7 (Additional file 1: Figure S1 presents the results for population P1)

  • The distributions of simulated QTL across the 10 chromosomes, based on their FST scores, for population P2 (LD = 0.7) are shown in Fig. 2a and b for the top 10 K and 5 K selected single nucleotide polymorphisms (SNPs), respectively


Introduction

It has become clear that increasing the density of marker panels, and even the use of sequence data, has not resulted in any meaningful increase in the accuracy of genomic selection (GS) using either regression (RM) or variance component (VC) approaches. This is in part due to the limitations of current methods. Chang et al. [6] proposed using population genetic parameters that can be derived from existing marker data to enhance the prioritization process. Their FST-based prioritization resulted in slight superiority compared to BayesB.
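
For context, a per-SNP fixation index can be computed from subpopulation allele frequencies as sketched below. This uses the basic Wright/Nei form FST = (H_T − H_S) / H_T for illustration only; it is not necessarily the exact prioritization statistic used by Chang et al. [6] or in this study, and `freq_groups` is a hypothetical input array.

```python
# Hedged sketch of a per-SNP FST score from group allele frequencies.
# `freq_groups` is a hypothetical (n_groups, n_snps) array of allele
# frequencies, e.g. one row per subpopulation or selection line.
import numpy as np

def per_snp_fst(freq_groups: np.ndarray) -> np.ndarray:
    """FST = (H_T - H_S) / H_T per SNP, given group allele frequencies."""
    p_bar = freq_groups.mean(axis=0)                   # overall frequency
    h_t = 2.0 * p_bar * (1.0 - p_bar)                  # total heterozygosity
    h_s = (2.0 * freq_groups * (1.0 - freq_groups)).mean(axis=0)  # within-group
    with np.errstate(invalid="ignore", divide="ignore"):
        fst = (h_t - h_s) / h_t
    return np.nan_to_num(fst)                          # monomorphic SNPs -> 0

# Example: two subpopulations, five SNPs; SNPs with diverged frequencies
# receive higher scores and would be prioritized for the reduced panel.
freq_groups = np.array([[0.10, 0.50, 0.90, 0.50, 0.30],
                        [0.80, 0.55, 0.20, 0.50, 0.35]])
print(per_snp_fst(freq_groups).round(3))
```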

