Abstract

Recent advances in AI, big data analytics, and magnetic resonance imaging (MRI) have revolutionized the study of brain diseases such as Alzheimer's Disease (AD). However, most AI models used for neuroimaging classification are limited in their learning strategy: they rely on batch training and lack incremental learning capability. To address this limitation, the systematic Brain Informatics methodology is revisited to realize evidence combination and fusion computing with multi-modal neuroimaging data through continuous learning. Specifically, we introduce the BNLoop-GAN (Loop-based Generative Adversarial Network for Brain Network) model, which employs techniques such as conditional generation, patch-based discrimination, and a Wasserstein gradient penalty to learn the implicit distribution of brain networks. Moreover, a multiple-loop-learning algorithm is developed to combine evidence through improved sample-contribution ranking during training. The effectiveness of our approach is demonstrated in a case study classifying individuals with AD versus healthy controls, using various experimental design strategies and multi-modal brain networks. The BNLoop-GAN model with multi-modal brain networks and multiple-loop learning improves classification performance.
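The abstract names conditional generation, patch-based discrimination, and a Wasserstein gradient penalty as ingredients of BNLoop-GAN. The sketch below illustrates only the standard WGAN-GP penalty term applied to a toy patch-based critic over brain connectivity matrices; the critic architecture, node count, and penalty weight are illustrative assumptions, not details of the BNLoop-GAN model itself.

```python
# Minimal sketch of a Wasserstein gradient penalty with a patch-based critic.
# All shapes and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn

N_NODES = 90        # assumed number of brain regions (hypothetical atlas size)
LAMBDA_GP = 10.0    # commonly used WGAN-GP penalty weight

class PatchCritic(nn.Module):
    """Toy critic that scores local sub-blocks (patches) of a connectivity matrix."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, kernel_size=4, stride=2, padding=1),  # patch-wise scores
        )

    def forward(self, x):
        return self.net(x)

def gradient_penalty(critic, real, fake):
    """WGAN-GP term: penalize deviation of the critic's gradient norm from 1."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return LAMBDA_GP * ((grad_norm - 1) ** 2).mean()

if __name__ == "__main__":
    critic = PatchCritic()
    real = torch.rand(8, 1, N_NODES, N_NODES)  # stand-in connectivity matrices
    fake = torch.rand(8, 1, N_NODES, N_NODES)
    print(gradient_penalty(critic, real, fake).item())
```

In a full critic objective, this penalty would be added to the usual Wasserstein terms, i.e. `-scores(real).mean() + scores(fake).mean() + gradient_penalty(...)`; how BNLoop-GAN combines this with conditional generation and multiple-loop learning is described in the paper itself.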
