Abstract

Background

Understanding the genetic architecture of brain structure and Alzheimer's disease is challenging, partly because it is difficult to design robust, unbiased descriptors of the brain.

Method

We present approaches for deriving robust brain imaging phenotypes using unsupervised deep representation learning. By training a 3-D convolutional autoencoder with a reconstruction loss, we derived a vector representation (termed an endophenotype) that captures rich morphological information about the brain. We also developed a perturbation-based decoder interpretation approach that highlights the brain regions most relevant to each individual endophenotype.

Result

We trained the model on T1 or T2-FLAIR (T2) brain MRIs from over 6,000 UK Biobank (UKBB) participants and used it to derive 128 endophenotypes. The endophenotypes have a mean heritability of 0.3, and genome-wide association studies (GWAS) of these endophenotypes identified 43 independent loci in the held-out UK Biobank dataset (discovery n = 22,962; replication n = 12,848), of which 13 loci have not been previously reported by the UK Biobank Big40 study.

Conclusion

Using UK Biobank data, we show that, compared with traditional brain image-derived phenotypes (IDPs), our endophenotypes are more heritable and have higher power for genetic discovery.
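The perturbation-based interpretation idea can be sketched in a few lines: perturb one latent dimension of the endophenotype vector, decode both the original and the perturbed vector, and treat the voxelwise absolute difference as a relevance map for that endophenotype. The sketch below is illustrative only; the linear `decode` stand-in, the volume shape, and all names are assumptions, whereas the paper uses the decoder half of a trained 3-D convolutional autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained decoder: maps a 128-d latent
# vector (endophenotype) to a small 3-D "brain" volume. In the paper
# this role is played by the decoder of the 3-D convolutional autoencoder.
LATENT_DIM = 128
VOLUME_SHAPE = (8, 8, 8)
W = rng.standard_normal((LATENT_DIM, int(np.prod(VOLUME_SHAPE))))

def decode(z):
    """Toy linear decoder: latent vector -> 3-D volume."""
    return (z @ W).reshape(VOLUME_SHAPE)

def relevance_map(z, dim, eps=1.0):
    """Perturb one latent dimension and measure the voxelwise change
    in the decoded volume; large values mark the regions most
    relevant to that endophenotype."""
    z_pert = z.copy()
    z_pert[dim] += eps
    return np.abs(decode(z_pert) - decode(z))

z = rng.standard_normal(LATENT_DIM)
rmap = relevance_map(z, dim=0)
print(rmap.shape)  # (8, 8, 8)
```

With a nonlinear decoder, the map depends on the base vector `z` and the step size `eps`, so in practice one would average maps over many participants.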
