Abstract

Multi-modality data convey complementary information that can be used to improve the accuracy of prediction models in disease diagnosis. However, effectively integrating multi-modality data remains a challenging problem, especially when the data are incomplete. For instance, more than half of the subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database have no fluorodeoxyglucose positron emission tomography and cerebrospinal fluid data. Currently, there are two commonly used strategies to handle the problem of incomplete data: 1) discard samples having missing features; and 2) impute those missing values via specific techniques. In the first case, a significant amount of useful information is lost and, in the second case, additional noise and artifacts might be introduced into the data. Also, previous studies generally focus on the pairwise relationships among subjects, without considering their underlying complex (e.g., high-order) relationships. To address these issues, in this paper, we propose a multi-hypergraph learning method for dealing with incomplete multi-modality data. Specifically, we first construct multiple hypergraphs to represent the high-order relationships among subjects by dividing them into several groups according to the availability of their data modalities. A hypergraph-regularized transductive learning method is then applied to these groups for automatic diagnosis of brain diseases. Extensive evaluation of the proposed method using all subjects in the baseline ADNI database indicates that our method achieves promising results in AD/MCI classification, compared with state-of-the-art methods.
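To illustrate the kind of machinery the abstract describes, the sketch below shows hypergraph-regularized transductive label propagation on a single hypergraph, using the standard normalized hypergraph Laplacian (Zhou et al.-style construction). This is a minimal, hypothetical sketch, not the paper's actual algorithm: the paper builds *multiple* hypergraphs (one per group of subjects sharing the same available modalities) and learns over them jointly, whereas here a single incidence matrix `H` and a simple regularization weight `lam` are assumed for brevity.

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}.

    H : (n_vertices, n_edges) binary incidence matrix (H[v, e] = 1 iff vertex v in hyperedge e).
    w : optional hyperedge weights (defaults to uniform weights).
    """
    n_v, n_e = H.shape
    w = np.ones(n_e) if w is None else np.asarray(w, dtype=float)
    dv = H @ w                 # vertex degrees (weighted)
    de = H.sum(axis=0)         # hyperedge degrees (number of vertices per edge)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    Theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n_v) - Theta

def transductive_labels(H, y, lam=1.0):
    """Transductive propagation: minimize f^T L f + lam * ||f - y||^2.

    y holds known labels (+1 / -1) and 0 for unlabeled subjects.
    Closed-form solution: f = lam * (L + lam * I)^{-1} y.
    """
    L = hypergraph_laplacian(H)
    n = L.shape[0]
    return np.linalg.solve(L + lam * np.eye(n), lam * np.asarray(y, dtype=float))

# Toy example: 4 subjects, 2 hyperedges {0, 1} and {2, 3}.
H = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)
y = [1, 0, -1, 0]  # subjects 1 and 3 are unlabeled
f = transductive_labels(H, y)
# Unlabeled subjects inherit the sign of the labeled subject in their hyperedge.
```

The smoothness term `f^T L f` penalizes label disagreement among subjects sharing a hyperedge, which is what lets group-wise hypergraphs encode high-order (beyond pairwise) relationships among subjects.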
