Abstract
Combining multi-modality brain data for disease diagnosis commonly leads to improved performance. A challenge in using multi-modality data is that the data are often incomplete; that is, some modalities may be missing for some subjects. In this work, we propose a deep learning framework for estimating multi-modality imaging data. Our method takes the form of a convolutional neural network whose input and output are two volumetric modalities. The network contains a large number of trainable parameters that capture the relationship between the input and output modalities. When trained on subjects with all modalities, the network can estimate the output modality given the input modality. We evaluated our method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, where the input and output modalities are MRI and PET images, respectively. Results showed that our method significantly outperformed prior methods.
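To make the cross-modality estimation idea concrete, the following is a minimal sketch of a 3D convolutional network that maps patches of one volumetric modality (e.g. MRI) to another (e.g. PET), trained with a voxel-wise loss on subjects that have both modalities. The layer widths, patch size, optimizer, and class name `CrossModalityCNN` are illustrative assumptions, not the architecture or training setup reported in the paper.

```python
# Hypothetical sketch: a small 3D CNN that estimates one volumetric modality
# from another. All hyperparameters below are placeholders for illustration.
import torch
import torch.nn as nn

class CrossModalityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Extract local features from the input modality volume
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Map features back to a single-channel estimate of the output modality
            nn.Conv3d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # x: (batch, 1, D, H, W) input-modality patch -> estimated output-modality patch
        return self.net(x)

# Training step sketch: minimize voxel-wise error on complete-data subjects
model = CrossModalityCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

mri_patch = torch.randn(4, 1, 32, 32, 32)   # placeholder input patches (e.g. MRI)
pet_patch = torch.randn(4, 1, 32, 32, 32)   # placeholder target patches (e.g. PET)

pred = model(mri_patch)
loss = loss_fn(pred, pet_patch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Once trained, the same network can be applied to subjects that have only the input modality to estimate the missing one; how closely this matches the authors' patch sampling and network depth is an assumption here.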