Low-rank multiview subspace learning (LMvSL) has shown great potential for cross-view classification in recent years. Despite their empirical success, existing LMvSL-based methods cannot handle view discrepancy and discriminability simultaneously, which leads to performance degradation when the discrepancy among multiview data is large. To circumvent this drawback, motivated by block-diagonal representation learning, we propose structured low-rank matrix recovery (SLMR), a method that removes view discrepancy and improves discriminability by recovering a structured low-rank matrix. Furthermore, recent low-rank models handle contaminated data only under predefined assumptions on the noise distribution, such as Gaussian or Laplacian; such assumptions are often impractical, since real noise may violate them and the true distribution is generally unknown in advance. To alleviate this limitation, we incorporate modal regression into the SLMR framework (termed MR-SLMR). Unlike previous LMvSL-based methods, MR-SLMR can handle any noise variable whose density has a mode at zero, which covers a wide range of noise such as Gaussian noise, random noise, and outliers. The alternating direction method of multipliers (ADMM) framework and half-quadratic theory are used to optimize MR-SLMR efficiently. Experimental results on four public databases demonstrate the superiority of MR-SLMR and its robustness to complicated noise.
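To make the zero-mode noise assumption concrete, the following is a minimal sketch of how half-quadratic theory typically turns a modal-regression objective into iteratively reweighted least squares: residuals pass through a Gaussian kernel, so observations far from the mode receive vanishing weight. The linear model, the bandwidth `sigma`, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hq_weights(residuals, sigma):
    # Half-quadratic auxiliary weights for a Gaussian kernel:
    # w_i = exp(-r_i^2 / (2 sigma^2)), so residuals far from the
    # mode at zero (e.g., gross outliers) get near-zero weight.
    return np.exp(-residuals ** 2 / (2.0 * sigma ** 2))

def modal_linear_fit(X, y, sigma=1.0, n_iter=30):
    # Alternate between the closed-form weight update and a
    # weighted least-squares solve (iteratively reweighted LS),
    # which is how half-quadratic theory reduces the nonconvex
    # modal-regression objective to simple convex subproblems.
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS warm start
    for _ in range(n_iter):
        d = hq_weights(y - X @ w, sigma)       # fix auxiliary weights ...
        Xw = X.T * d                           # ... then solve weighted LS
        w = np.linalg.solve(Xw @ X + 1e-8 * np.eye(X.shape[1]), Xw @ y)
    return w

# A clean linear relation plus a few gross outliers: the modal fit
# tracks the conditional mode instead of being dragged by the mean.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)
y[:10] += 20.0                                 # inject outliers
print(modal_linear_fit(X, y))                  # approx. [1.0, 2.0]
```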
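The structured low-rank recovery itself is not specified in the abstract, but ADMM solvers for low-rank models are typically built from two proximal operators: singular value thresholding for the nuclear norm and soft thresholding for a sparse error term. The sketch below wires them into the classical robust-PCA problem purely as a stand-in for the solver pattern; the paper's SLMR/MR-SLMR updates (block-diagonal structure, modal-regression loss) would differ, and all names here are assumptions.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: the proximal operator of the
    # nuclear norm tau*||.||_*; shrinking singular values is what
    # drives the recovered matrix toward low rank each iteration.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, tau):
    # Elementwise proximal operator of tau*||.||_1, the usual
    # update for a sparse error term E.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lowrank_sparse_admm(X, lam=None, mu=1.0, n_iter=200):
    # Classical ADMM for  min ||L||_* + lam*||E||_1  s.t.  X = L + E
    # (robust PCA), shown only to illustrate the alternating-update
    # pattern that structured low-rank recovery methods build on.
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    L, E, Y = (np.zeros_like(X) for _ in range(3))
    for _ in range(n_iter):
        L = svt(X - E + Y / mu, 1.0 / mu)               # low-rank update
        E = soft_threshold(X - L + Y / mu, lam / mu)    # sparse update
        Y += mu * (X - L - E)                           # dual ascent
    return L, E
```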