Robust Subspace Learning

  • Abstract
  • Literature Map
  • References
  • Citations
  • Similar Papers
Abstract

Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods is limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers from sample sets, which motivates us to take advantage of low-rank constraints in order to learn a robust and discriminative subspace for classification. In this chapter, we introduce a discriminative subspace learning method named the Supervised Regularization based Robust Subspace (SRRS) approach, which incorporates a low-rank constraint. SRRS seeks low-rank representations from the noisy data and jointly learns a discriminative subspace from the recovered clean data. A supervised regularization function is designed to make use of class label information and thereby enhance the discriminability of the subspace. The approach is formulated as a constrained rank minimization problem, and we design an inexact augmented Lagrange multiplier (ALM) optimization algorithm to solve it. Unlike existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from the recovered data and explicitly incorporates the supervised information. Our approach and several baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. Experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.
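
To make the formulation concrete, the abstract implies an objective of roughly the following shape. This is a sketch reconstructed from the description above, not a quotation from the chapter: the ℓ2,1 penalty on the noise term, the trade-off weights λ1 and λ2, and the exact form of the supervised regularizer f are assumptions.

```latex
\min_{P,\,Z,\,E}\ \operatorname{rank}(Z) \;+\; \lambda_1 \lVert E \rVert_{2,1} \;+\; \lambda_2\, f(P,\, XZ)
\qquad \text{s.t.}\quad X = XZ + E
```

Here X holds the noisy samples, Z is the low-rank representation (so XZ is the recovered clean data), E absorbs the noise, P is the learned projection, and f is a Fisher-style term that shrinks within-class scatter and spreads between-class scatter of the projected recovered data P^T XZ. In practice the nonconvex rank(Z) is relaxed (typically to the nuclear norm ‖Z‖_*) before the ALM solver is applied.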

References (showing 10 of 41 papers)
  • Exact Matrix Completion via Convex Optimization. Emmanuel J Candès + 1 more. Foundations of Computational Mathematics, Apr 3, 2009. DOI: 10.1007/s10208-009-9045-5. Cited by 5227. Open Access.

  • Learning Structured Low-Rank Representations for Image Classification. Yangmuzi Zhang + 2 more. Jun 1, 2013. DOI: 10.1109/cvpr.2013.93. Cited by 238. Open Access.

  • Learning Balanced and Unbalanced Graphs via Low-Rank Coding. Sheng Li + 1 more. IEEE Transactions on Knowledge and Data Engineering, May 1, 2015. DOI: 10.1109/tkde.2014.2365793. Cited by 67.

  • Low-rank representation based discriminative projection for robust feature extraction. Nan Zhang + 1 more. Neurocomputing, Jan 14, 2013. DOI: 10.1016/j.neucom.2012.12.012. Cited by 38.

  • Robust Subspace Discovery through Supervised Low-Rank Constraints. Sheng Li + 1 more. Apr 28, 2014. DOI: 10.1137/1.9781611973440.19. Cited by 52. Open Access.

  • A generalized Foley–Sammon transform based on generalized Fisher discriminant criterion and its application to face recognition. Yue-Fei Guo + 4 more. Pattern Recognition Letters, Oct 9, 2002. DOI: 10.1016/s0167-8655(02)00207-6. Cited by 140.

  • Generalized Transfer Subspace Learning Through Low-Rank Constraint. Ming Shao + 2 more. International Journal of Computer Vision, Jan 31, 2014. DOI: 10.1007/s11263-014-0696-6. Cited by 300.

  • Robust Face Recognition via Sparse Representation. J Wright + 4 more. IEEE Transactions on Pattern Analysis and Machine Intelligence, Jan 1, 2009. DOI: 10.1109/tpami.2008.79. Cited by 9318. Open Access.

  • The Nature of Statistical Learning Theory. Vladimir N Vapnik. Jan 1, 2000. DOI: 10.1007/978-1-4757-3264-1. Cited by 5194. Open Access.

  • Distributed Low-Rank Subspace Segmentation. Ameet Talwalkar + 4 more. Dec 1, 2013. DOI: 10.1109/iccv.2013.440. Cited by 34. Open Access.

Citations (showing 2 of 2 papers)
  • Research Article · Cited by 12 · DOI: 10.1109/tnnls.2020.2978761
Kernelized Sparse Bayesian Matrix Factorization
  • IEEE Transactions on Neural Networks and Learning Systems, Jan 1, 2021. Caoyuan Li + 5 more

Extracting low-rank and/or sparse structures using matrix factorization techniques has been extensively studied in the machine learning community. Kernelized matrix factorization (KMF) is a powerful tool for incorporating side information into the low-rank approximation model, and it has been applied to problems in data mining, recommender systems, image restoration, and machine vision. However, most existing KMF models specify the rows and columns of the data matrix through a Gaussian process prior and require the rank to be tuned manually; models based on regularization or Markov chain Monte Carlo also suffer from computational issues. In this article, we develop a hierarchical kernelized sparse Bayesian matrix factorization (KSBMF) model to integrate side information. KSBMF automatically infers the parameters and latent variables, including the reduced rank, using variational Bayesian inference. In addition, the model simultaneously achieves low-rankness through sparse Bayesian learning and columnwise sparsity through an enforced constraint on the latent factor matrices. We further connect KSBMF with the nonlocal image processing framework to develop two algorithms for image denoising and inpainting. Experimental results demonstrate that KSBMF outperforms state-of-the-art approaches for these image-restoration tasks under various levels of corruption.
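
The automatic rank inference can be illustrated with a heavily simplified sketch of the underlying automatic relevance determination (ARD) idea: attach a precision to each factor column and prune columns whose precision diverges. This is only the intuition, not the authors' variational Bayesian algorithm; the function name `ard_mf`, the alternating ridge updates, and the pruning threshold are all illustrative assumptions.

```python
import numpy as np

def ard_mf(X, k=20, n_iter=50, lam=1e-2, prune_tol=1e6):
    """Simplified ARD-style matrix factorization X ~ U @ V.T.

    Alternating ridge updates with per-column precisions ``alpha``;
    columns whose precision exceeds ``prune_tol`` are pruned, so the
    effective rank is inferred rather than fixed in advance.
    """
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, k))
    V = rng.standard_normal((n, k))
    alpha = np.ones(k)                       # per-column ARD precisions
    for _ in range(n_iter):
        # ridge update for U given V, then V given U, weighted by alpha
        U = X @ V @ np.linalg.inv(V.T @ V + lam * np.diag(alpha))
        V = X.T @ U @ np.linalg.inv(U.T @ U + lam * np.diag(alpha))
        # ARD update: columns with little energy get a huge precision
        energy = (U ** 2).sum(axis=0) + (V ** 2).sum(axis=0)
        alpha = (m + n) / (energy + 1e-12)
        keep = alpha < prune_tol             # drop effectively dead columns
        U, V, alpha = U[:, keep], V[:, keep], alpha[keep]
    return U, V
```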

  • Research Article · Open Access · Cited by 9 · DOI: 10.1109/lsp.2020.3011896
Online Reweighted Least Squares Robust PCA
  • IEEE Signal Processing Letters, Jan 1, 2020. Athanasios A Rontogiannis + 2 more

This letter deals with robust principal component analysis (RPCA), that is, the decomposition of a data matrix into the sum of a low-rank component and a sparse component. After expressing the low-rank component in factorized form, we develop a novel online RPCA algorithm that is based entirely on reweighted least squares recursions and is appropriate for sequential data processing. The proposed algorithm is fast, memory-optimal and, as corroborated by indicative empirical results on simulated data and a video processing application, competitive with the state of the art in terms of estimation performance.
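
The reweighted-least-squares idea can be made concrete with a toy batch version: an ℓ1-type fit of a rank-r factorization is approximated by iteratively reweighted least squares, so gross errors are downweighted and end up in the sparse component. The letter's actual contribution is the online, recursive variant; the function below and its constants (the weight floor `eps`, the small ridge term) are illustrative assumptions.

```python
import numpy as np

def irls_rpca(M, rank=5, n_iter=30, eps=1e-3):
    """Toy batch reweighted-least-squares RPCA: M ~ L + S, with L = U @ V.T."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    W = np.ones_like(M, dtype=float)          # IRLS weights
    for _ in range(n_iter):
        # weighted alternating least squares, one column/row at a time
        for j in range(n):
            G = U * W[:, j:j + 1]             # rows of U scaled by weights
            V[j] = np.linalg.solve(G.T @ U + 1e-6 * np.eye(rank), G.T @ M[:, j])
        for i in range(m):
            H = V * W[i, :, None]             # rows of V scaled by weights
            U[i] = np.linalg.solve(H.T @ V + 1e-6 * np.eye(rank), H.T @ M[i, :])
        R = M - U @ V.T
        W = 1.0 / np.maximum(np.abs(R), eps)  # downweight large residuals (l1 surrogate)
    L = U @ V.T
    return L, M - L                           # low-rank and sparse parts
```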

Similar Papers
  • Research Article · Cited by 126 · DOI: 10.1109/tnnls.2015.2464090
Learning Robust and Discriminative Subspace With Low-Rank Constraints
  • IEEE Transactions on Neural Networks and Learning Systems, Aug 31, 2015. Sheng Li + 1 more

In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods is limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers from sample sets, which motivates us to take advantage of low-rank constraints in order to learn a robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, which incorporates a low-rank constraint. SRRS seeks low-rank representations from the noisy data and jointly learns a discriminative subspace from the recovered clean data. A supervised regularization function is designed to make use of class label information and thereby enhance the discriminability of the subspace. The approach is formulated as a constrained rank-minimization problem, solved with an inexact augmented Lagrange multiplier optimization algorithm. Unlike existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from the recovered data and explicitly incorporates the supervised information. Our approach and several baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.

  • Conference Article · Cited by 52 · DOI: 10.1137/1.9781611973440.19
Robust Subspace Discovery through Supervised Low-Rank Constraints
  • Apr 28, 2014. Sheng Li + 1 more

Subspace learning is a popular approach for feature extraction and classification. However, its performance would be heavily degraded when data are corrupted by large amounts of noise. Inspired by recent work in matrix recovery, we tackle this problem by exploiting a subspace that is robust to noise and large variability for classification. Specifically, we propose a novel Supervised Regularization based Robust Subspace (SRRS) approach via low-rank learning. Unlike existing subspace methods, our approach jointly learns low-rank representations and a robust subspace from noisy observations. At the same time, to improve the classification performance, class label information is incorporated as supervised regularization. The problem can then be formulated as a constrained rank minimization objective function, which can be effectively solved by the inexact augmented Lagrange multiplier (ALM) algorithm. Our approach differs from current sparse representation and low-rank learning methods in that it explicitly learns a low-dimensional subspace where the supervised information is incorporated. Extensive experimental results on four datasets demonstrate that our approach outperforms the state-of-the-art subspace and low-rank learning methods in almost all cases, especially when the data contain large variations or are heavily corrupted by noise.
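
The inexact ALM step can be sketched for the core low-rank-representation subproblem that SRRS builds on. The snippet below solves the standard nuclear-norm relaxation min ‖Z‖_* + λ‖E‖_1 subject to X = XZ + E with a linearized inexact ALM loop; the joint update of the projection and the supervised regularizer are omitted, and the penalty schedule (μ, ρ) is a common default rather than the authors' setting.

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(A, tau):
    """Entrywise soft threshold: prox of tau * l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def inexact_alm_lrr(X, lam=0.1, mu=1e-2, rho=1.5, n_iter=100):
    """Inexact ALM for  min ||Z||_* + lam*||E||_1  s.t.  X = X @ Z + E."""
    d, n = X.shape
    Z, E, Y = np.zeros((n, n)), np.zeros((d, n)), np.zeros((d, n))
    eta = np.linalg.norm(X, 2) ** 2          # Lipschitz bound for the Z step
    for _ in range(n_iter):
        # linearized Z update via singular value thresholding
        G = Z + X.T @ (X - X @ Z - E + Y / mu) / eta
        Z = svt(G, 1.0 / (mu * eta))
        # closed-form E update via soft thresholding
        E = soft(X - X @ Z + Y / mu, lam / mu)
        # dual ascent on the multiplier, then increase the penalty
        Y = Y + mu * (X - X @ Z - E)
        mu = min(mu * rho, 1e6)
    return Z, E
```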

  • Research Article · DOI: 10.1142/s0219691317500606
Robust subspace learning method for hyperspectral image classification
  • International Journal of Wavelets, Multiresolution and Information Processing, Nov 1, 2017. Haoliang Yuan + 1 more

Subspace learning (SL) is an important technology for extracting discriminative features for hyperspectral image (HSI) classification. However, in practical applications, some acquired HSIs are contaminated with considerable noise during the imaging process. In this case, most existing SL methods yield limited performance in the subsequent classification procedure. In this paper, we propose a robust subspace learning (RSL) method, which utilizes a local linear regression and a supervised regularization function simultaneously. To effectively incorporate the spatial information, a local linear regression is used to recover data from the noisy data within a spatial set. The recovered data not only reduce the noise effect but also include spectral-spatial information. To utilize the label information, a supervised regularization function based on the Fisher criterion is used to learn a discriminative subspace from the recovered data. To optimize RSL, we develop an efficient iterative algorithm. Extensive experimental results demonstrate that RSL greatly outperforms many existing SL methods when the HSI data contain considerable noise.
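
As a rough sketch of the recovery ingredient (our reading of the abstract; the neighborhood notation N(i) and the ridge weight γ are assumptions), each noisy pixel x_i is first re-expressed from the pixels in its spatial window:

```latex
\hat{x}_i = X_{\mathcal{N}(i)}\, w_i,
\qquad
w_i = \arg\min_{w}\ \lVert x_i - X_{\mathcal{N}(i)}\, w \rVert_2^2 + \gamma \lVert w \rVert_2^2
```

The ridge solution w_i = (X_{N(i)}^T X_{N(i)} + γI)^{-1} X_{N(i)}^T x_i blends spectrally similar spatial neighbors, which is what injects the spectral-spatial information into the recovered data before the Fisher-style subspace is learned.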

  • Research Article · Cited by 12 · DOI: 10.1016/j.isatra.2015.12.011
Discriminative sparse subspace learning and its application to unsupervised feature selection
  • ISA Transactions, Jan 20, 2016. Nan Zhou + 4 more

  • Research Article · Cited by 6 · DOI: 10.1016/j.eswa.2024.123831
Discriminative sparse subspace learning with manifold regularization
  • Expert Systems with Applications, Mar 26, 2024. Wenyi Feng + 10 more

  • Research Article · Cited by 27 · DOI: 10.1109/tcsvt.2008.2004933
Locality Versus Globality: Query-Driven Localized Linear Models for Facial Image Computing
  • IEEE Transactions on Circuits and Systems for Video Technology, Dec 1, 2008. Yun Fu + 4 more

Conventional subspace learning and recent feature extraction methods treat globality as the key criterion in designing discriminative algorithms for image classification. We demonstrate in this paper that applying locality in the sample space, feature space, and learning space via linear subspace learning can substantially boost the discriminating power, as measured by the discriminating power coefficient (DPC). The proposed solution achieves good classification accuracy gains and is computationally efficient. In particular, we approximate the global nonlinearity through a multimodal localized piecewise subspace learning framework, in which three locality criteria can work individually or jointly for any new subspace learning algorithm design. It turns out that most existing subspace learning methods can be unified in such a common framework embodying either the global or the local learning manner. On the other hand, we address the problem of numerical difficulty in large-size pattern classification, where many local variations cannot be adequately handled by a single global model. By localizing the modeling, the classification error rate estimation is also localized, and thus it becomes more robust and flexible for model selection among different candidates. As a new algorithm design based on the proposed framework, the query-driven locally adaptive (QDLA) mixture-of-experts model for robust face recognition and head pose estimation is presented. Experiments demonstrate the local approach to be effective, robust, and fast for large-size, multiclass, and multivariance data sets.
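
A minimal sketch of the query-driven locality idea follows; it is not the authors' QDLA model, and the k-means partitioning, the per-region PCA, and all names are illustrative assumptions. The point is only the routing pattern: fit one linear model per local region and, at query time, send the sample to the nearest region's model.

```python
import numpy as np

def fit_local_models(X, n_regions=8, dim=16, seed=0):
    """Toy piecewise-linear subspace learning: k-means regions + local PCA.

    Assumes every region ends up with at least ``dim`` samples.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_regions, replace=False)].copy()
    for _ in range(20):                        # plain k-means
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        for r in range(n_regions):
            if (labels == r).any():
                centers[r] = X[labels == r].mean(axis=0)
    models = []
    for r in range(n_regions):
        Xr = X[labels == r] - centers[r]
        _, _, Vt = np.linalg.svd(Xr, full_matrices=False)
        models.append(Vt[:dim])                # local PCA basis
    return centers, models

def project_query(x, centers, models):
    """Route the query to its nearest local model, then project it."""
    r = ((centers - x) ** 2).sum(-1).argmin()
    return models[r] @ (x - centers[r])
```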

  • Research Article · Cited by 16 · DOI: 10.1016/j.neucom.2019.07.049
Multi-view Laplacian least squares for human emotion recognition
  • Neurocomputing, Aug 21, 2019. Shuai Guo + 6 more

  • Research Article · Cited by 2 · DOI: 10.1142/s0218001419510066
Discriminative Low-Rank Subspace Learning with Nonconvex Penalty
  • International Journal of Pattern Recognition and Artificial Intelligence, Sep 1, 2019. Kan Xie + 3 more

Subspace learning has been widely utilized to extract discriminative features for classification tasks such as face recognition, even when facial images are occluded or corrupted. However, the performance of most existing methods degrades significantly when the data are contaminated with severe noise, especially when the magnitude of the gross corruption can be arbitrarily large. To this end, in this paper, a novel discriminative subspace learning method is proposed based on the well-known low-rank representation (LRR). Specifically, a discriminant low-rank representation and the projecting subspace are learned simultaneously, in a supervised way. To avoid the deviation from the original solution caused by convex relaxation, we adopt the Schatten p-norm and the ℓp-norm in place of the nuclear norm and the ℓ1-norm, respectively. Experimental results on two well-known databases, i.e., PIE and ORL, demonstrate that the proposed method achieves better classification scores than the state-of-the-art approaches.
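
For reference, the Schatten p-norm of a matrix Z with singular values σ_i(Z) is

```latex
\lVert Z \rVert_{S_p} = \Big( \sum_i \sigma_i(Z)^p \Big)^{1/p}
```

which recovers the nuclear norm at p = 1; for 0 < p < 1, ‖Z‖_{S_p}^p is a nonconvex surrogate that approximates rank(Z) more tightly than the nuclear norm does, which is what lets the relaxed solution stay closer to the original rank-minimization solution.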

  • Research Article · Cited by 26 · DOI: 10.1016/j.neunet.2018.08.003
Low-rank and sparse embedding for dimensionality reduction
  • Neural Networks, Aug 18, 2018. Na Han + 5 more

  • Research Article · Cited by 33 · DOI: 10.1109/tcyb.2018.2882924
Recursive Discriminative Subspace Learning With ℓ1-Norm Distance Constraint
  • IEEE Transactions on Cybernetics, Dec 11, 2018. Dong Zhang + 3 more

In feature learning tasks, one of the greatest challenges is to generate an efficient discriminative subspace. In this paper, we propose a novel subspace learning method, named recursive discriminative subspace learning with an ℓ1-norm distance constraint (RDSL). RDSL can robustly extract features from contaminated images and learn a discriminative subspace. With an inequality-based ℓ1-norm distance metric constraint, the minimized ℓ1-norm distance objective function with slack variables induces samples in the same class to cluster as closely as possible, while samples from different classes are separated from each other as far as possible. By utilizing ℓ1-norm terms in both the objective function and the constraint, RDSL can handle noisy data and outliers well. In addition, the large-margin formulation makes the proposed method insensitive to initialization. We describe two approaches to solve RDSL with a recursive strategy. Experimental results on six benchmark datasets, including both the original data and contaminated data, demonstrate that RDSL outperforms the state-of-the-art methods.
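
One plausible instantiation of the inequality-constrained, large-margin ℓ1-distance objective described above is sketched below; this is our paraphrase, and the exact objective, margin, and slack weighting in RDSL may differ.

```latex
\min_{W,\ \xi \ge 0}\ \sum_i \big\lVert W^\top (x_i - \mu_{y_i}) \big\rVert_1 + C \sum_i \xi_i
\quad \text{s.t.}\quad
\big\lVert W^\top (x_i - \mu_c) \big\rVert_1 - \big\lVert W^\top (x_i - \mu_{y_i}) \big\rVert_1 \ \ge\ 1 - \xi_i
\quad \forall\, c \ne y_i
```

Here μ_c is the mean of class c, so same-class ℓ1 distances are minimized while every wrong-class distance must exceed the correct-class distance by a unit margin, up to slack ξ_i.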

  • Research Article · Cited by 101 · DOI: 10.1109/tcsvt.2014.2305495
Human Gait Recognition via Sparse Discriminant Projection Learning
  • IEEE Transactions on Circuits and Systems for Video Technology, Oct 1, 2014. Zhihui Lai + 3 more

As an important biometric feature, human gait has great potential in video-surveillance-based applications. In this paper, we focus on matrix-representation-based human gait recognition and propose a novel discriminant subspace learning method called sparse bilinear discriminant analysis (SBDA). SBDA extends the recently proposed matrix-representation-based discriminant analysis methods to sparse cases. By introducing the L1 and L2 norms into the objective function of SBDA, two interrelated sparse discriminant subspaces can be obtained for gait feature extraction. Since the optimization problem has no closed-form solution, an iterative method is designed to compute the optimal sparse subspaces using L1- and L2-norm sparse regression. Theoretical analyses reveal the close relationship between SBDA and previous matrix-representation-based discriminant analysis methods. Since each nonzero element in each subspace is selected from the most important variables/factors, SBDA has the potential to perform comparably to or even better than the state-of-the-art subspace learning methods in gait recognition. Moreover, using the strategy of SBDA plus linear discriminant analysis (LDA), we can further improve the performance. A set of experiments on the standard USF HumanID and CASIA gait databases demonstrates that the proposed SBDA and SBDA + LDA can obtain competitive performance.
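
The bilinear part is easy to state: a matrix-valued sample X_i is projected from both sides at once, and elastic-net-style penalties induce sparsity in the two projections. A schematic form follows; the discriminant term J_disc and the λ weights are placeholders, not SBDA's exact objective.

```latex
Y_i = U^\top X_i V,
\qquad
\min_{U,\,V}\ J_{\mathrm{disc}}(U, V) \;+\; \lambda_1 \lVert U \rVert_1 + \lambda_2 \lVert V \rVert_1 \;+\; \lambda_3 \big( \lVert U \rVert_F^2 + \lVert V \rVert_F^2 \big)
```

Because no closed-form solution exists, U and V are updated alternately, each by an L1/L2-regularized sparse regression, which matches the iterative scheme described above.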

  • Book Chapter · Cited by 1 · DOI: 10.1007/978-3-319-27674-8_22
Discriminant Manifold Learning via Sparse Coding for Image Analysis
  • Jan 1, 2016. Meng Pang + 3 more

Traditional subspace learning methods directly calculate the statistical properties of the original input images, while ignoring the different contributions of different image components. In fact, noise (e.g., illumination, shadow) in an image often has a negative influence on learning the desired subspace and should contribute little to image recognition. To tackle this problem, we propose a novel subspace learning method named Discriminant Manifold Learning via Sparse Coding (DML_SC). In our method, we first decompose the input image into several components via dictionary learning, and then regroup the components into a More Important Part (MIP) and a Less Important Part (LIP). The MIP can be regarded as the clean part of the original image residing on a nonlinear submanifold, while the LIP is treated as noise in the image. Finally, the MIP and LIP are incorporated into manifold learning to learn a desired discriminative subspace. The proposed method is able to deal with data with and without labels, yielding supervised and unsupervised versions of DML_SC. Experimental results show that DML_SC achieves the best performance on image recognition and clustering tasks compared with well-known subspace learning and sparse representation methods.
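
Schematically, the decomposition step reads as follows (our notation; D denotes the learned dictionary and the MIP/LIP split follows the abstract):

```latex
x \;\approx\; D a \;=\; D_{\mathrm{MIP}}\, a_{\mathrm{MIP}} \;+\; D_{\mathrm{LIP}}\, a_{\mathrm{LIP}}
```

The atoms of D are regrouped so that the MIP reconstruction carries the clean structure residing on the nonlinear submanifold while the LIP reconstruction collects illumination, shadow, and other noise; only the former is passed on to the manifold learning stage.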

  • Research Article · Cited by 40 · DOI: 10.1109/tcyb.2016.2533430
Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification
  • IEEE Transactions on Cybernetics, Apr 1, 2017. Haoliang Yuan + 1 more

Classification of the pixels in a hyperspectral image (HSI) is an important task and has been popularly applied in many practical applications. Its major challenge is the high-dimensional, small-sample-size problem. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, which is formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed method outperforms many SL methods.
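
The ridge linear regression (RLR) starting point has a closed form, which is why it is an attractive base for subspace learning. With X the training pixels (one per column) and Y their class-indicator matrix,

```latex
\min_{W}\ \lVert Y - W^\top X \rVert_F^2 + \lambda \lVert W \rVert_F^2
\quad\Longrightarrow\quad
W = \big( X X^\top + \lambda I \big)^{-1} X\, Y^\top
```

SSSLR keeps this regression backbone but adds the convex-set spatial constraint and the shared hidden feature space described above to make W more discriminant.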

  • Research Article · Open Access · DOI: 10.1155/2020/8872348
Bio-Inspired Structure Representation Based Cross-View Discriminative Subspace Learning via Simultaneous Local and Global Alignment
  • Complexity, Nov 4, 2020. Ao Li + 5 more

Recently, cross-view feature learning has been a hot topic in machine learning due to the wide applications of multiview data. Nevertheless, the distribution discrepancy across views means that instances of the same class from different views can be farther apart than instances of different classes within the same view. To address this problem, in this paper, we develop a novel cross-view discriminative feature subspace learning method inspired by layered human visual perception. First, the proposed method utilizes a separable low-rank self-representation model to disentangle the class and view structure layers. Second, a local alignment is constructed with two designed graphs to guide the subspace decomposition in a pairwise way. Finally, a global discriminative constraint on the distribution centers in each view is designed for further alignment improvement. Extensive cross-view classification experiments on several public datasets prove that our proposed method is more effective than other existing feature learning methods.
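
In schematic form, the separable self-representation reads as follows (our notation, guided by the abstract; the norms, the weight λ, and the two alignment terms Ω are not spelled out in the abstract and are assumptions):

```latex
\min_{Z_c,\,Z_v,\,E}\ \lVert Z_c \rVert_* + \lVert Z_v \rVert_* + \lambda \lVert E \rVert_{2,1}
+ \Omega_{\mathrm{local}}(Z_c, Z_v) + \Omega_{\mathrm{global}}(Z_c, Z_v)
\quad \text{s.t.}\quad X = X (Z_c + Z_v) + E
```

Z_c is the class-structure layer and Z_v the view-structure layer; the graph-guided local terms and the global center constraint push same-class, cross-view instances together.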

  • Research Article · Open Access · Cited by 6 · DOI: 10.3390/electronics11050810
Robust Latent Common Subspace Learning for Transferable Feature Representation
  • Electronics, Mar 4, 2022. Shanhua Zhan + 2 more

This paper proposes a novel robust latent common subspace learning (RLCSL) method by integrating low-rank and sparse constraints into a joint learning framework. Specifically, we transform the data from the source and target domains into a latent common subspace to perform data reconstruction, i.e., the transformed source data are used to reconstruct the transformed target data. We impose joint low-rank and sparse constraints on the reconstruction coefficient matrix, which achieves the following objectives: (1) the data from different domains can be interlaced by the low-rank constraint; (2) the data from different domains but with the same label can be aligned together by the sparse constraint. In this way, the new feature representation in the latent common subspace is both discriminative and transferable. To learn a suitable classifier, we also integrate classifier learning and feature representation learning into a unified objective, so that the high-level semantic label (data label) fully guides the learning of both tasks. Experiments are conducted on diverse data sets for image, object, and document classification, and encouraging experimental results show that the proposed method outperforms several state-of-the-art methods.
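
A compact way to write the reconstruction at the heart of RLCSL (our notation; the classifier coupling ℓ and the weights are schematic, following the abstract rather than quoting the paper):

```latex
\min_{P,\,Z,\,E,\,W}\ \lVert Z \rVert_* + \lambda_1 \lVert Z \rVert_1 + \lambda_2 \lVert E \rVert_{2,1}
+ \ell\big(W,\ P^\top X_s,\ y_s\big)
\quad \text{s.t.}\quad P^\top X_t = P^\top X_s Z + E
```

P maps both domains into the latent common subspace, the transformed source P^T X_s reconstructs the transformed target P^T X_t through the jointly low-rank and sparse coefficient matrix Z, and the loss ℓ ties a classifier W to the source labels y_s so that label information guides both tasks.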
