Abstract

Multimodal fusion of different types of neuroimaging data provides an irreplaceable opportunity to take advantage of complementary cross-modal information that may only be partially contained in any single modality. Deep neural networks are especially useful for jointly analyzing multimodal data, as many studies have shown that deep learning is effective at revealing complex, non-linear relations buried in the data. However, most deep models, e.g., the convolutional neural network and its numerous extensions, can only operate on regular Euclidean data such as voxels in 3D MRI; interrelated, hidden structures beyond grid neighborhoods, such as brain connectivity, may be overlooked. Moreover, how to effectively incorporate neuroscience knowledge into multimodal data fusion within a single deep framework remains understudied. In this work, we developed a graph-based deep neural network to simultaneously model brain structure and function in Mild Cognitive Impairment (MCI): the topology of the graph is initialized with the structural network (from diffusion MRI) and iteratively updated by incorporating functional information (from functional MRI) to maximize the capability of differentiating MCI patients from elderly normal controls. This yields a new connectome that captures "deep relations" between brain structure and function in MCI patients, which we name the Deep Brain Connectome. Although the deep brain connectome is learned individually, it shows consistent patterns of alteration relative to the structural network at the group level. With the deep brain connectome, our deep model achieves 92.7% classification accuracy on the ADNI dataset.
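To make the fusion scheme concrete, the following is a minimal NumPy sketch of the core idea, not the authors' implementation: a single graph-convolution forward pass in which the adjacency matrix is initialized from a structural (diffusion MRI) connectome and the node features come from functional MRI. All names (`A_struct`, `X_func`, the 90-region parcellation, and the random data) are illustrative assumptions; in the actual model the adjacency would be a learnable parameter updated by backpropagation against the MCI-vs-control classification loss.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetric GCN-style normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, X, W):
    # One graph-convolution step: propagate features over the graph,
    # apply a linear transform, then a ReLU non-linearity.
    return np.maximum(A_norm @ X @ W, 0.0)

rng = np.random.default_rng(0)
n_rois, n_feat, n_hidden = 90, 16, 8          # e.g. 90 brain regions (illustrative)

# Structural connectome (symmetric, non-negative) -- stands in for
# a fiber-count matrix derived from diffusion MRI tractography.
A_struct = np.abs(rng.standard_normal((n_rois, n_rois)))
A_struct = (A_struct + A_struct.T) / 2.0

# Node features -- stands in for fMRI-derived signals per region.
X_func = rng.standard_normal((n_rois, n_feat))
W = rng.standard_normal((n_feat, n_hidden)) * 0.1

# The adjacency starts as the structural network; in training it would
# be treated as learnable and iteratively refined by functional information.
A = A_struct.copy()
H = gcn_layer(normalize_adjacency(A), X_func, W)
print(H.shape)  # (90, 8): one hidden embedding per brain region
```

A classification head (e.g. pooling the region embeddings followed by a linear layer) would sit on top of `H`, and the gradient flowing back into `A` is what turns the structural initialization into the learned deep brain connectome.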
