Abstract
Learning from multimodal brain imaging data has attracted considerable attention in medical image analysis due to the proliferation of multimodal data collection. It is widely accepted that multimodal data provide complementary information beyond what can be mined from a single modality. However, unifying image-based knowledge across multimodal data is challenging because of differences in image signals, resolution, data structure, etc. In this study, we design a supervised deep model that jointly analyzes brain morphometry and functional connectivity on the cortical surface, which we name deep multimodal brain network learning (DMBNL). Two graph-based kernels, i.e., a geometry-aware surface kernel (GSK) and a topology-aware network kernel (TNK), are proposed for processing cortical surface morphometry and the brain functional network, respectively. The vertex features on the cortical surface from the GSK are pooled and fed into the TNK as its initial regional features. Finally, a graph-level feature is computed for each individual and can thus be used for classification tasks. We evaluate our model on a large autism imaging dataset, and the experimental results demonstrate its effectiveness.
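The pipeline described in the abstract (GSK on the cortical surface, pooling of vertex features into regions, TNK on the functional network, then a graph-level readout) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: all function names, layer choices (neighborhood averaging, ReLU), the mean pooling, and the mean readout are assumptions made for illustration.

```python
import numpy as np

def gsk_layer(vertex_feats, surface_adj, weight):
    """Toy geometry-aware surface kernel (GSK): one neighborhood-averaging
    update over cortical-surface vertices, then a linear map + ReLU."""
    deg = surface_adj.sum(axis=1, keepdims=True) + 1e-8
    agg = (surface_adj @ vertex_feats) / deg
    return np.maximum(agg @ weight, 0.0)

def pool_to_regions(vertex_feats, region_labels, n_regions):
    """Mean-pool vertex features into regional (parcel) features."""
    out = np.zeros((n_regions, vertex_feats.shape[1]))
    for r in range(n_regions):
        mask = region_labels == r
        if mask.any():
            out[r] = vertex_feats[mask].mean(axis=0)
    return out

def tnk_layer(region_feats, func_conn, weight):
    """Toy topology-aware network kernel (TNK): propagate regional features
    over the functional-connectivity graph, then a linear map + ReLU."""
    return np.maximum((func_conn @ region_feats) @ weight, 0.0)

def dmbnl_forward(vertex_feats, surface_adj, region_labels, func_conn,
                  w_gsk, w_tnk):
    """End-to-end toy forward pass: GSK -> pool -> TNK -> graph readout."""
    v = gsk_layer(vertex_feats, surface_adj, w_gsk)
    regions = pool_to_regions(v, region_labels, func_conn.shape[0])
    regions = tnk_layer(regions, func_conn, w_tnk)
    return regions.mean(axis=0)  # graph-level feature for classification

# Example with random data: 20 surface vertices grouped into 4 regions.
rng = np.random.default_rng(0)
V, R, d, h1, h2 = 20, 4, 3, 5, 6
graph_feat = dmbnl_forward(
    rng.random((V, d)), rng.random((V, V)),
    rng.integers(0, R, size=V), rng.random((R, R)),
    rng.random((d, h1)), rng.random((h1, h2)),
)
print(graph_feat.shape)  # (6,)
```

In a real model the weights would be trained end-to-end against the classification labels; here they are random only to show how features flow from vertices to regions to a single per-subject vector.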
Published in: Proceedings of the IEEE International Symposium on Biomedical Imaging