Abstract

Multi-modal classification has demonstrated its superiority over conventional single-modal methods for Alzheimer’s disease (AD) diagnosis, and multi-modal feature selection has therefore attracted increasing attention. However, most previous approaches use a fixed affinity matrix to describe the local neighborhood relations among samples, and consider only intra-modal similarity while ignoring inter-modal similarity. Moreover, they generally treat all samples equally and neglect the negative influence of noise and outliers. To address these problems, this paper proposes a new multi-level Graph regularized Robust Multi-modal Feature Selection method, called GRMFS, which simultaneously performs noise-robust feature selection and adaptive multi-level similarity preservation. On the one hand, GRMFS introduces an ε-capped ℓ2-norm loss into the regression framework, which adaptively assigns a weight to each sample and thereby improves robustness against outliers. On the other hand, to explore the intrinsic multi-modal local structures, GRMFS simultaneously learns intra-modal and inter-modal local similarities and preserves them in the learned subspace to guide feature selection. Experiments on a real AD database illustrate the advantages of the proposed method in identifying disease status compared with other approaches.
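To make the two mechanisms concrete, the following is a minimal sketch of how an ε-capped ℓ2-norm regression loss with graph regularization is commonly written in the robust feature-selection literature; the notation (W, s_ij, λ, γ) is illustrative, and the exact GRMFS objective may differ:

% Hedged sketch: a standard capped-l2 objective with graph regularization,
% not necessarily the exact formulation used by GRMFS.
\min_{W,\,S}\;
  \sum_{i=1}^{n} \min\!\left( \left\lVert W^{\top} x_i - y_i \right\rVert_2,\; \varepsilon \right)
  \;+\; \lambda \sum_{i,j} s_{ij} \left\lVert W^{\top} x_i - W^{\top} x_j \right\rVert_2^2
  \;+\; \gamma \left\lVert W \right\rVert_{2,1}

Under a half-quadratic reformulation, the capped term is equivalent to a weighted least-squares loss in which each sample receives an adaptive weight that drops to zero once its residual exceeds ε, which is how outliers are suppressed. The ℓ2,1-norm on the projection matrix W induces row sparsity for feature selection, while the learned similarities s_ij (defined over both intra-modal and inter-modal neighbors) preserve the local structure in the projected subspace.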
