Abstract

Synergistic optimization is a process that combines different elements to achieve better results than would be possible by optimizing each element individually. Our objective is to design systems in which genotyping and phenotyping complement each other to achieve superior performance. Deep phenotyping can be defined as a comprehensive and detailed approach to characterizing traits of animals, including those based on the study of intermediate phenotypes that represent not only the phenome but also, e.g., the metabolome, the proteome, and the transcriptome (i.e., -omics phenotypes). These phenotypes are often also called high-throughput, but high-throughput should not be taken as a synonym for cheap. The interplay between genotyping and (deep) phenotyping has a major impact on the acquisition of genetic knowledge, but it can also be used in the context of genomic evaluations (i.e., as intermediate correlated features such as high-throughput or -omics phenotypes). While past studies have largely focused on ways to integrate deep phenotyping to gain new knowledge and to support genetic and genomic evaluations, phenotyping, and especially deep phenotyping, currently requires massive investment, limiting its effective and widespread use under field as well as experimental conditions. This problem extends to complex phenotypes such as feed intake, methane emissions, or, in general, traits requiring expensive sensors. Even with all possible investments, many phenotyping efforts are limited by a poor choice of animals. This presentation will address and introduce the topic of genome-enabled optimization of (deep) phenotyping, taking into account that genotypes are in many cases much cheaper than phenotypes, can be obtained at early stages, and can help organize animal sampling in an efficient manner.
The last point is also linked to validation techniques comparing predicted to observed phenotypes, and it is closely related to issues of recursive genomic reference population building. Techniques used in other fields, such as chemometrics (e.g., building infrared spectral prediction equations), will be proposed to optimize coverage of genetic variability. In the context of recursively performed genomic evaluations (i.e., increasing the reference population at each round), population accuracy and prediction bias will be addressed. Moreover, long-term genetic contribution and population structure issues will have to be considered. Based on the literature and practical examples, systematically organized sampling techniques will be presented to discuss basic principles such as covering observed genomic and expected phenotypic diversity, as well as the long-term effects of recursively created genomic reference populations.
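The abstract does not name a specific chemometrics sampling technique. One classical method used for building infrared calibration sets, and a plausible candidate for selecting animals that cover genomic diversity, is the Kennard-Stone (max-min distance) algorithm. The sketch below (an illustration, not the authors' method) applies it to hypothetical per-animal coordinates, e.g., leading principal components of a genomic relationship matrix:

```python
import numpy as np


def kennard_stone(X, n_select):
    """Select n_select rows of X that maximally cover its spread.

    Starts from the sample farthest from the centroid, then repeatedly
    adds the candidate whose minimum distance to the already-selected
    set is largest (the max-min criterion used in chemometrics).
    """
    X = np.asarray(X, dtype=float)
    remaining = list(range(len(X)))
    # Seed with the sample farthest from the centroid.
    centroid = X.mean(axis=0)
    first = int(np.argmax(np.linalg.norm(X - centroid, axis=1)))
    selected = [first]
    remaining.remove(first)
    while len(selected) < n_select and remaining:
        # Distance of each remaining candidate to its nearest selected sample.
        d = np.min(
            np.linalg.norm(
                X[remaining][:, None, :] - X[selected][None, :, :], axis=2
            ),
            axis=1,
        )
        pick = remaining[int(np.argmax(d))]
        selected.append(pick)
        remaining.remove(pick)
    return selected


# Hypothetical example: 5 animals described by 2 genomic PCs;
# pick the 3 that best span the observed genomic variability.
coords = np.array([[0, 0], [10, 0], [0, 10], [5, 5], [1, 1]])
chosen = kennard_stone(coords, 3)
```

Because each new sample is placed as far as possible from all previously chosen ones, the selected subset spreads over the extremes of the genomic space rather than clustering near its center, which is the "covering genetic variability" goal the abstract refers to.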


