Abstract
In this paper, we propose multimodal extensions of the recently introduced sparse subspace clustering (SSC) and low-rank representation (LRR) based subspace clustering algorithms for clustering data lying in a union of subspaces. Given multimodal data, our method simultaneously clusters the data in the individual modalities according to their subspaces. In our formulation, we exploit the self-expressiveness property of each sample in its respective modality and enforce a common representation across the modalities. We modify our model so that it is robust to noise. Furthermore, we kernelize the proposed algorithms to handle nonlinearities in the data. The optimization problems are solved efficiently using the alternating direction method of multipliers (ADMM). Experiments on face clustering indicate that the proposed method performs favorably compared to state-of-the-art subspace clustering methods.
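To illustrate the two building blocks the abstract relies on, the sketch below implements single-modality SSC via ADMM: each column of the data matrix X is expressed as a sparse combination of the other columns (the self-expressiveness property), by solving min ||C||_1 + (lam/2)||X - XC||_F^2 subject to diag(C) = 0. This is a minimal sketch of the standard SSC formulation, not the authors' multimodal model; the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def ssc_admm(X, lam=50.0, rho=5.0, n_iter=300):
    """Sketch of SSC: min ||C||_1 + lam/2 ||X - XC||_F^2, diag(C) = 0, via ADMM.

    X has one data point per column. Returns the coefficient matrix C;
    spectral clustering on the affinity |C| + |C|^T would follow.
    (Illustrative single-modality sketch, not the paper's multimodal method.)
    """
    n = X.shape[1]
    XtX = X.T @ X
    # Cached inverse for the quadratic (least-squares) subproblem.
    inv = np.linalg.inv(lam * XtX + rho * np.eye(n))
    C = np.zeros((n, n))
    U = np.zeros((n, n))  # scaled dual variable
    for _ in range(n_iter):
        # Z-update: argmin_Z lam/2 ||X - XZ||_F^2 + rho/2 ||Z - C + U||_F^2
        Z = inv @ (lam * XtX + rho * (C - U))
        # C-update: entrywise soft-thresholding (prox of the l1 norm) ...
        A = Z + U
        C = np.sign(A) * np.maximum(np.abs(A) - 1.0 / rho, 0.0)
        # ... combined with the diag(C) = 0 constraint (separable, so exact)
        np.fill_diagonal(C, 0.0)
        # Dual ascent on the constraint Z = C
        U = U + Z - C
    return C
```

In the multimodal extension described in the abstract, one such self-expressive term appears per modality, coupled by enforcing a common representation matrix C across modalities.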