Robust Subspace Learning
Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods is limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit a robust and discriminative subspace for classification. In this chapter, we introduce a discriminative subspace learning method named the Supervised Regularization based Robust Subspace (SRRS) approach, which incorporates a low-rank constraint. SRRS jointly seeks low-rank representations from the noisy data and learns a discriminative subspace from the recovered clean data. A supervised regularization function is designed to make use of class label information and thereby enhance the discriminability of the subspace. Our approach is formulated as a constrained rank-minimization problem, and we design an inexact augmented Lagrange multiplier (ALM) optimization algorithm to solve it. Unlike existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data and explicitly incorporates the supervised information. Our approach and several baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. Experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.
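The abstract describes SRRS as a constrained rank-minimization problem. A schematic formulation consistent with that description (the notation here, including the trade-off weights $\lambda_1, \lambda_2$ and the supervised regularizer $f$, is ours, not necessarily the chapter's):

```latex
\min_{P,\,Z,\,E}\; \|Z\|_{*} \;+\; \lambda_{1}\,\|E\|_{1} \;+\; \lambda_{2}\, f\!\bigl(P^{\top} X Z\bigr)
\qquad \text{s.t.}\quad X = XZ + E,
```

where $X$ is the noisy data matrix, $Z$ the low-rank representation (its nuclear norm $\|Z\|_*$ standing in for the rank), $E$ the sparse error, $P$ the learned projection, and $f$ a Fisher-style supervised regularizer on the projected recovered data $P^{\top}XZ$.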
- Research Article
12
- 10.1109/tnnls.2020.2978761
- Jan 1, 2021
- IEEE Transactions on Neural Networks and Learning Systems
Extracting low-rank and/or sparse structures using matrix factorization techniques has been extensively studied in the machine learning community. Kernelized matrix factorization (KMF) is a powerful tool for incorporating side information into the low-rank approximation model, and it has been applied to problems in data mining, recommender systems, image restoration, and machine vision. However, most existing KMF models incorporate side information about the rows and columns of the data matrix through Gaussian process priors and require the rank to be tuned manually. Existing models based on regularization or Markov chain Monte Carlo also face computational issues. In this article, we develop a hierarchical kernelized sparse Bayesian matrix factorization (KSBMF) model to integrate side information. KSBMF automatically infers the parameters and latent variables, including the reduced rank, using variational Bayesian inference. In addition, the model simultaneously achieves low-rankness through sparse Bayesian learning and columnwise sparsity through an enforced constraint on the latent factor matrices. We further connect KSBMF with the nonlocal image-processing framework to develop two algorithms for image denoising and inpainting. Experimental results demonstrate that KSBMF outperforms state-of-the-art approaches for these image-restoration tasks under various levels of corruption.
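The rank-inference idea, sparsity-inducing priors that switch off unneeded latent dimensions, can be illustrated with a much simpler model. Below is a toy automatic-relevance-determination (ARD) style factorization sketch (variable names and update rules are ours; it omits the kernels, side information, and full variational treatment of KSBMF):

```python
import numpy as np

def ard_mf(Y, max_rank=20, n_iter=100, tol=1e-8):
    """Toy low-rank factorization Y ~ U @ V.T with ARD-style column pruning.

    Each latent column k carries a precision gamma[k]; columns whose energy
    collapses are pruned, so the effective rank is inferred from the data.
    """
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, max_rank))
    V = rng.standard_normal((n, max_rank))
    gamma = np.ones(max_rank)      # ARD precisions, one per latent column
    noise_prec = 1.0
    for _ in range(n_iter):
        # Ridge-style updates for U and V; gamma shrinks unneeded columns.
        U = noise_prec * Y @ V @ np.linalg.inv(
            noise_prec * V.T @ V + np.diag(gamma))
        V = noise_prec * Y.T @ U @ np.linalg.inv(
            noise_prec * U.T @ U + np.diag(gamma))
        # ARD update: precision grows where column energy is small.
        col_energy = (U ** 2).sum(0) + (V ** 2).sum(0)
        gamma = (m + n) / (col_energy + 1e-12)
        # Prune collapsed columns (automatic rank selection).
        keep = col_energy > tol * col_energy.max()
        U, V, gamma = U[:, keep], V[:, keep], gamma[keep]
        # Update the noise precision from the current residual.
        noise_prec = Y.size / (np.linalg.norm(Y - U @ V.T) ** 2 + 1e-12)
    return U, V

# A rank-3 matrix plus light noise: the sketch should retain about 3 columns.
rng = np.random.default_rng(1)
Y = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
Y += 0.01 * rng.standard_normal(Y.shape)
U, V = ard_mf(Y)
print("inferred rank:", U.shape[1])
```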
- Research Article
9
- 10.1109/lsp.2020.3011896
- Jan 1, 2020
- IEEE Signal Processing Letters
The letter deals with the problem known as robust principal component analysis (RPCA), that is, the decomposition of a data matrix into the sum of a low-rank matrix component and a sparse matrix component. After expressing the low-rank component in factorized form, we develop a novel online RPCA algorithm that is based entirely on reweighted least-squares recursions and is appropriate for sequential data processing. The proposed algorithm is fast, memory-optimal and, as corroborated by indicative empirical results on simulated data and a video-processing application, competitive with the state of the art in terms of estimation performance.
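A batch toy version conveys the reweighting idea (this sketch is ours and deliberately ignores the letter's online, recursive, memory-optimal design): entries with large residuals receive small weights, so gross errors are pushed into the sparse component rather than distorting the low-rank fit.

```python
import numpy as np

def irls_rpca(Y, rank=3, n_iter=30, eps=1e-3, lam=1e-6):
    """Toy reweighted least-squares RPCA: Y ~ L + S with L = U @ V.T."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(n_iter):
        # IRLS weights: approximately an l1 loss on the residual.
        W = 1.0 / (np.abs(Y - U @ V.T) + eps)
        for i in range(m):   # weighted ridge update of each row of U
            Vw = V * W[i, :, None]
            U[i] = np.linalg.solve(Vw.T @ V + lam * np.eye(rank), Vw.T @ Y[i])
        for j in range(n):   # weighted ridge update of each row of V
            Uw = U * W[:, j, None]
            V[j] = np.linalg.solve(Uw.T @ U + lam * np.eye(rank), Uw.T @ Y[:, j])
    L = U @ V.T
    return L, Y - L   # low-rank part, sparse residual

# Rank-2 data corrupted by 5% large spikes.
rng = np.random.default_rng(1)
L_true = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 50))
Y = L_true.copy()
mask = rng.random(Y.shape) < 0.05
Y[mask] += 10 * rng.standard_normal(mask.sum())
L_hat, S_hat = irls_rpca(Y, rank=2)
print("relative error:", np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```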
- Research Article
126
- 10.1109/tnnls.2015.2464090
- Aug 31, 2015
- IEEE Transactions on Neural Networks and Learning Systems
In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods is limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit a robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, which incorporates the low-rank constraint. SRRS jointly seeks low-rank representations from the noisy data and learns a discriminative subspace from the recovered clean data. A supervised regularization function is designed to make use of the class label information and thereby enhance the discriminability of the subspace. Our approach is formulated as a constrained rank-minimization problem, and we design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data and explicitly incorporates the supervised information. Our approach and several baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.
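Inexact-ALM solvers for nuclear-norm-plus-ℓ1 objectives of this kind are typically assembled from two proximal operators, shown below as a minimal generic sketch (not the paper's full solver, which also updates the projection and the Lagrange multipliers):

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau * ||X||_1: elementwise shrinkage."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * ||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

# Inside an ALM iteration these solve, in closed form, the subproblems
#   argmin_Z tau*||Z||_* + 0.5*||Z - A||_F^2   and
#   argmin_E tau*||E||_1 + 0.5*||E - B||_F^2 .
```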
- Conference Article
52
- 10.1137/1.9781611973440.19
- Apr 28, 2014
Subspace learning is a popular approach for feature extraction and classification. However, its performance would be heavily degraded when data are corrupted by large amounts of noise. Inspired by recent work in matrix recovery, we tackle this problem by exploiting a subspace that is robust to noise and large variability for classification. Specifically, we propose a novel Supervised Regularization based Robust Subspace (SRRS) approach via low-rank learning. Unlike existing subspace methods, our approach jointly learns low-rank representations and a robust subspace from noisy observations. At the same time, to improve the classification performance, class label information is incorporated as supervised regularization. The problem can then be formulated as a constrained rank minimization objective function, which can be effectively solved by the inexact augmented Lagrange multiplier (ALM) algorithm. Our approach differs from current sparse representation and low-rank learning methods in that it explicitly learns a low-dimensional subspace where the supervised information is incorporated. Extensive experimental results on four datasets demonstrate that our approach outperforms the state-of-the-art subspace and low-rank learning methods in almost all cases, especially when the data contain large variations or are heavily corrupted by noise.
- Research Article
- 10.1142/s0219691317500606
- Nov 1, 2017
- International Journal of Wavelets, Multiresolution and Information Processing
Subspace learning (SL) is an important technology for extracting discriminative features for hyperspectral image (HSI) classification. However, in practical applications, some acquired HSIs are contaminated with considerable noise during the imaging process. In this case, most existing SL methods yield limited performance in the subsequent classification procedure. In this paper, we propose a robust subspace learning (RSL) method, which utilizes a local linear regression and a supervised regularization function simultaneously. To effectively incorporate the spatial information, a local linear regression is used to recover clean data from the noisy data over local spatial sets. The recovered data not only reduce the noise effect but also embody the spectral-spatial information. To utilize the label information, a supervised regularization function based on the Fisher criterion is used to learn a discriminative subspace from the recovered data. To optimize RSL, we develop an efficient iterative algorithm. Extensive experimental results demonstrate that RSL greatly outperforms many existing SL methods when the HSI data contain considerable noise.
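The Fisher-criterion regularizer that both RSL and SRRS rely on is built from standard scatter matrices; a generic sketch (not either paper's exact code):

```python
import numpy as np

def fisher_scatters(X, y):
    """Within-class (Sw) and between-class (Sb) scatter matrices.

    X: (n_samples, n_features); y: (n_samples,) integer class labels.
    A Fisher-style regularizer favors projections P with small
    trace(P.T @ Sw @ P) and large trace(P.T @ Sb @ P).
    """
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb
```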
- Research Article
12
- 10.1016/j.isatra.2015.12.011
- Jan 20, 2016
- ISA Transactions
Discriminative sparse subspace learning and its application to unsupervised feature selection
- Research Article
6
- 10.1016/j.eswa.2024.123831
- Mar 26, 2024
- Expert Systems with Applications
Discriminative sparse subspace learning with manifold regularization
- Research Article
27
- 10.1109/tcsvt.2008.2004933
- Dec 1, 2008
- IEEE Transactions on Circuits and Systems for Video Technology
Conventional subspace learning and recent feature extraction methods take globality as the key criterion when designing discriminative algorithms for image classification. We demonstrate in this paper that applying the local manner in sample space, feature space, and learning space via linear subspace learning can substantially boost the discriminating power, as measured by the discriminating power coefficient (DPC). The proposed solution achieves good classification accuracy gains and is computationally efficient. In particular, we approximate the global nonlinearity through a multimodal localized piecewise subspace learning framework, in which three locality criteria can work individually or jointly for any new subspace learning algorithm design. It turns out that most existing subspace learning methods can be unified in such a common framework, embodying either the global or the local learning manner. We also address the numerical difficulty that arises in large-size pattern classification, where many local variations cannot be adequately handled by a single global model. By localizing the modeling, the classification error rate estimation is also localized, and thus model selection among different candidates becomes more robust and flexible. As a new algorithm design based on the proposed framework, the query-driven locally adaptive (QDLA) mixture-of-experts model for robust face recognition and head pose estimation is presented. Experiments demonstrate the local approach to be effective, robust, and fast for large-size, multiclass, and multivariance data sets.
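A crude way to see the piecewise idea at work (our own simplification, using k-means clusters as "experts" and per-cluster PCA as the local subspace; QDLA itself is a much richer mixture-of-experts):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

class LocalPiecewisePCA:
    """One local linear subspace per cluster; each query is projected by
    the expert whose centroid is nearest (a crude query-driven selection)."""

    def __init__(self, n_experts=4, n_components=5, seed=0):
        self.km = KMeans(n_clusters=n_experts, n_init=10, random_state=seed)
        self.n_components = n_components
        self.experts = []

    def fit(self, X):
        labels = self.km.fit_predict(X)
        self.experts = [PCA(self.n_components).fit(X[labels == k])
                        for k in range(self.km.n_clusters)]
        return self

    def transform(self, X):
        labels = self.km.predict(X)
        Z = np.zeros((len(X), self.n_components))
        for k, pca in enumerate(self.experts):
            mask = labels == k
            if mask.any():
                Z[mask] = pca.transform(X[mask])
        return Z

# Usage: Z = LocalPiecewisePCA().fit(X).transform(X) approximates a global
# nonlinear manifold with several local linear pieces.
```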
- Research Article
16
- 10.1016/j.neucom.2019.07.049
- Aug 21, 2019
- Neurocomputing
Multi-view Laplacian least squares for human emotion recognition
- Research Article
2
- 10.1142/s0218001419510066
- Sep 1, 2019
- International Journal of Pattern Recognition and Artificial Intelligence
Subspace learning has been widely utilized to extract discriminative features for classification tasks such as face recognition, even when facial images are occluded or corrupted. However, the performance of most existing methods degrades significantly when the data are contaminated with severe noise, especially when the magnitude of the gross corruption can be arbitrarily large. To this end, in this paper, a novel discriminative subspace learning method is proposed based on the well-known low-rank representation (LRR). Specifically, a discriminant low-rank representation and the projecting subspace are learned simultaneously, in a supervised way. To avoid the deviation from the original solution caused by relaxation, we adopt the Schatten p-norm and an ℓp-type norm instead of the nuclear norm and the ℓ1-norm, respectively. Experimental results on two well-known databases, i.e., PIE and ORL, demonstrate that the proposed method achieves better classification scores than state-of-the-art approaches.
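For reference, the Schatten $p$-norm is defined on the singular values $\sigma_i$ of a matrix $X$; the nuclear norm is recovered at $p = 1$, and values $0 < p < 1$ (a quasi-norm) approximate the rank more tightly:

```latex
\|X\|_{S_p} = \Bigl(\sum_{i} \sigma_i^{\,p}\Bigr)^{1/p}.
```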
- Research Article
26
- 10.1016/j.neunet.2018.08.003
- Aug 18, 2018
- Neural Networks
Low-rank and sparse embedding for dimensionality reduction
- Research Article
33
- 10.1109/tcyb.2018.2882924
- Dec 11, 2018
- IEEE Transactions on Cybernetics
In feature learning tasks, one of the greatest challenges is to generate an efficient discriminative subspace. In this paper, we propose a novel subspace learning method, named recursive discriminative subspace learning with an ℓ1-norm distance constraint (RDSL). RDSL can robustly extract features from contaminated images and learn a discriminative subspace. Using an inequality-based ℓ1-norm distance metric constraint, the minimized ℓ1-norm distance objective function with slack variables induces samples of the same class to cluster as closely as possible, while samples from different classes are separated as far as possible. By utilizing ℓ1-norm terms in both the objective function and the constraints, RDSL handles noisy data and outliers well. In addition, the large-margin formulation makes the proposed method insensitive to initialization. We describe two approaches to solve RDSL with a recursive strategy. Experimental results on six benchmark datasets, including the original data and the contaminated data, demonstrate that RDSL outperforms state-of-the-art methods.
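Read literally, the description suggests a large-margin program of roughly the following shape (entirely our schematic reconstruction: $P$ is the projection, $m_c$ the class means, $\xi_{ik}$ slack variables, $C$ a trade-off weight; the paper's actual objective may differ):

```latex
\min_{P,\;\xi \ge 0}\ \sum_{i} \bigl\|P^{\top}(x_i - m_{y_i})\bigr\|_{1} \;+\; C \sum_{i,\,k \ne y_i} \xi_{ik}
\quad \text{s.t.}\quad
\bigl\|P^{\top}(x_i - m_{k})\bigr\|_{1} - \bigl\|P^{\top}(x_i - m_{y_i})\bigr\|_{1} \;\ge\; 1 - \xi_{ik},\quad k \ne y_i .
```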
- Research Article
101
- 10.1109/tcsvt.2014.2305495
- Oct 1, 2014
- IEEE Transactions on Circuits and Systems for Video Technology
As an important biometric feature, human gait has great potential in video-surveillance-based applications. In this paper, we focus on matrix-representation-based human gait recognition and propose a novel discriminant subspace learning method called sparse bilinear discriminant analysis (SBDA). SBDA extends the recently proposed matrix-representation-based discriminant analysis methods to sparse cases. By introducing the L1 and L2 norms into the objective function of SBDA, two interrelated sparse discriminant subspaces can be obtained for gait feature extraction. Since the optimization problem has no closed-form solution, an iterative method is designed to compute the optimal sparse subspace using L1- and L2-norm sparse regression. Theoretical analyses reveal the close relationship between SBDA and previous matrix-representation-based discriminant analysis methods. Since each nonzero element in each subspace is selected from the most important variables/factors, SBDA has the potential to perform comparably to or even better than state-of-the-art subspace learning methods in gait recognition. Moreover, using the strategy of SBDA plus linear discriminant analysis (LDA), we can further improve the performance. A set of experiments on the standard USF HumanID and CASIA gait databases demonstrates that the proposed SBDA and SBDA + LDA can obtain competitive performance.
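The L1-plus-L2 (elastic-net) regression at the core of such sparse subspace methods is easy to demonstrate in isolation; a generic sketch of extracting one sparse discriminant direction by regressing class indicators on the features (this is not SBDA's bilinear algorithm, just the flavor of penalty it uses):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def sparse_discriminant_direction(X, y, alpha=0.1, l1_ratio=0.5):
    """Elastic-net regression of a +/-1 class indicator on X.

    The L1 part zeroes out unimportant variables (sparsity); the L2
    part stabilizes correlated ones.
    """
    t = np.where(y == np.unique(y)[0], -1.0, 1.0)   # two-class target
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(X, t)
    return model.coef_   # sparse projection direction

# Usage sketch: w = sparse_discriminant_direction(X, y); scores = X @ w
```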
- Book Chapter
1
- 10.1007/978-3-319-27674-8_22
- Jan 1, 2016
Traditional subspace learning methods directly calculate the statistical properties of the original input images, while ignoring the different contributions of different image components. In fact, the noise (e.g., illumination, shadow) in an image often has a negative influence on learning the desired subspace and should contribute little to image recognition. To tackle this problem, we propose a novel subspace learning method named Discriminant Manifold Learning via Sparse Coding (DML_SC). In our method, we first decompose the input image into several components via dictionary learning, and then regroup the components into a More Important Part (MIP) and a Less Important Part (LIP). The MIP can be regarded as the clean part of the original image residing on a nonlinear submanifold, and the LIP as noise in the image. Finally, the MIP and LIP are incorporated into manifold learning to learn a desired discriminative subspace. The proposed method is able to deal with data with and without labels, yielding supervised and unsupervised DML_SCs. Experimental results show that DML_SC achieves the best performance on image recognition and clustering tasks compared with well-known subspace learning and sparse representation methods.
- Research Article
40
- 10.1109/tcyb.2016.2533430
- Apr 1, 2017
- IEEE Transactions on Cybernetics
Classification of the pixels in a hyperspectral image (HSI) is an important task and has been popularly applied in many practical applications. Its major challenge is the high-dimensionality, small-sample-size problem. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed method outperforms many SL methods.
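For context, the RLR template that SSSLR extends is the standard ridge-regularized regression of a label-indicator matrix $Y$ on the data $X$ (generic notation; the paper's exact weighting may differ), which has the familiar closed-form solution:

```latex
\min_{W}\ \|Y - W^{\top} X\|_{F}^{2} + \lambda \|W\|_{F}^{2}
\quad\Longrightarrow\quad
W = \bigl(X X^{\top} + \lambda I\bigr)^{-1} X\, Y^{\top}.
```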
- Research Article
- 10.1155/2020/8872348
- Nov 4, 2020
- Complexity
Recently, cross-view feature learning has been a hot topic in machine learning due to the wide applications of multiview data. Nevertheless, the distribution discrepancy between views means that instances of different views from the same class can be farther apart than instances within the same view but from different classes. To address this problem, in this paper, we develop a novel cross-view discriminative feature subspace learning method inspired by layered human visual perception. Firstly, the proposed method utilizes a separable low-rank self-representation model to disentangle the class and view structure layers. Secondly, a local alignment is constructed with two designed graphs to guide the subspace decomposition in a pairwise way. Finally, a global discriminative constraint on the distribution centers of each view is designed for further alignment improvement. Extensive cross-view classification experiments on several public datasets prove that our proposed method is more effective than other existing feature learning methods.
- Research Article
6
- 10.3390/electronics11050810
- Mar 4, 2022
- Electronics
This paper proposes a novel robust latent common subspace learning (RLCSL) method by integrating low-rank and sparse constraints into a joint learning framework. Specifically, we transform the data from the source and target domains into a latent common subspace to perform the data reconstruction, i.e., the transformed source data are used to reconstruct the transformed target data. We impose joint low-rank and sparse constraints on the reconstruction coefficient matrix, which achieves the following objectives: (1) the data from different domains can be interlaced by using the low-rank constraint; (2) the data from different domains but with the same label can be aligned together by using the sparse constraint. In this way, the new feature representation in the latent common subspace is both discriminative and transferable. To learn a suitable classifier, we also integrate classifier learning and feature representation learning into a unified objective, so that the high-level semantic label (data label) fully guides the learning of both tasks. Experiments are conducted on diverse data sets for image, object, and document classification, and encouraging experimental results show that the proposed method outperforms several state-of-the-art methods.
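A schematic objective consistent with this description (our notation: $P$ the common projection, $X_s$ and $X_t$ the source and target data, $Z$ the reconstruction coefficients, $E$ a residual; the paper's full formulation also couples in the classifier term):

```latex
\min_{P,\,Z,\,E}\ \|Z\|_{*} + \lambda_{1}\,\|Z\|_{1} + \lambda_{2}\,\|E\|_{2,1}
\quad \text{s.t.}\quad P^{\top} X_{t} = P^{\top} X_{s}\, Z + E .
```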