Discriminative Subspace Learning With Adaptive Graph Regularization
Abstract
Many subspace learning methods based on low-rank representation employ a nearest-neighborhood graph to preserve the local structure. However, in these methods the nearest-neighborhood graph is a binary matrix, which fails to precisely capture the similarity between distinct samples. Moreover, they require manually selecting an appropriate number of neighbors and cannot adaptively update the similarity graph during projection learning. To tackle these issues, we introduce Discriminative Subspace Learning with Adaptive Graph Regularization (DSL_AGR), an unsupervised subspace learning method that integrates low-rank representation, adaptive graph learning, and nonnegative representation into a unified framework. DSL_AGR imposes a low-rank constraint to capture the global structure of the data and extract more discriminative information, and introduces a novel graph regularization term guided by nonnegative representation to strengthen its ability to capture the local structure. Since closed-form solutions are not easily obtained, we devise an iterative optimization algorithm to solve the model, and we analyze its computational complexity and convergence. Extensive experiments on real-world datasets demonstrate that DSL_AGR achieves competitive performance compared with other state-of-the-art methods.
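As a hedged illustration of the core idea, replacing a binary k-NN graph with similarity weights derived from a nonnegative self-representation can be sketched as follows. The multiplicative-update solver, the regularization weight `lam`, and the random data are assumptions for demonstration, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def nonnegative_codes(X, lam=0.1, n_iter=300, eps=1e-9):
    """Nonnegative self-representation: find C >= 0 with X ~= X @ C by
    minimizing ||X - X C||_F^2 + lam * ||C||_F^2 with multiplicative
    updates (an illustrative solver, not the one in the paper)."""
    n = X.shape[1]
    G = X.T @ X
    Gp, Gm = np.maximum(G, 0), np.maximum(-G, 0)   # split G into +/- parts
    C = rng.random((n, n)) * 0.01
    for _ in range(n_iter):
        C *= (Gp + Gm @ C) / (Gm + Gp @ C + lam * C + eps)
    return C

def similarity_graph(C):
    """Turn representation coefficients into a symmetric, weighted similarity
    graph (in place of a binary k-NN graph) and its graph Laplacian."""
    W = (C + C.T) / 2
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    return W, L

X = rng.standard_normal((20, 50))        # 20-dim features, 50 samples
C = nonnegative_codes(X)
W, L = similarity_graph(C)

# Graph-regularization value tr(P^T X L X^T P) for some projection P;
# it is nonnegative because the Laplacian L is positive semidefinite.
P = rng.standard_normal((20, 5))
reg = np.trace(P.T @ X @ L @ X.T @ P)
```

Because the weights in W are continuous, no neighbor count has to be chosen by hand; in the actual method the graph would additionally be re-estimated as the projection is learned.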
- doi:10.1016/j.sigpro.2020.107485 · Signal Processing · Jan 21, 2020 (49)
- doi:10.1016/j.bspc.2022.103750 · Biomedical Signal Processing and Control · May 5, 2022 (15)
- doi:10.1109/tgrs.2012.2226730 · IEEE Transactions on Geoscience and Remote Sensing · Jul 1, 2013 (320)
- doi:10.1016/j.neunet.2018.08.007 · Neural Networks · Aug 14, 2018 (131)
- doi:10.1016/j.engappai.2020.103758 · Engineering Applications of Artificial Intelligence · Jun 18, 2020 (42)
- doi:10.1016/j.patcog.2020.107758 · Pattern Recognition · Nov 20, 2020 (54)
- doi:10.1109/tpami.2015.2462360 · IEEE Transactions on Pattern Analysis and Machine Intelligence · Mar 1, 2016 (410)
- doi:10.1016/j.patcog.2009.05.005 · Pattern Recognition · May 18, 2009 (803)
- doi:10.1109/tip.2021.3068646 · IEEE Transactions on Image Processing · Jan 1, 2021 (148)
- doi:10.1109/icdm.2007.89 · IEEE International Conference on Data Mining (ICDM) · Oct 1, 2007 (196)
- doi:10.1016/j.isatra.2015.12.011 · ISA Transactions · Jan 20, 2016 (12)
Discriminative sparse subspace learning and its application to unsupervised feature selection
- Research Article · doi:10.1109/tim.2022.3187735 · IEEE Transactions on Instrumentation and Measurement · Jan 1, 2022 (1)
Projection learning is an effective and widely used technique for extracting discriminative features for pattern recognition and classification. In projection learning, it is essential to preserve the global and local structure of the data while extracting discriminative features. However, transforming the source data directly to a target, i.e., the strict binary label matrix, using a projection matrix may lose some intrinsic information. We propose a locality-aware discriminative subspace learning (LADSL) method to address these limitations. In LADSL, the original data are transformed into a latent space instead of a restrictive label space; the latent space seamlessly integrates the original visual features and the class labels to improve classification performance. The projection matrix and classification parameters are jointly optimized to supervise the discriminative subspace learning. Additionally, LADSL exploits the adaptive local structure to preserve the nearest-neighbor relationships among data samples while learning the projections, achieving superior classification performance. Experiments have been carried out on various datasets for face and object recognition, and the results are compared with state-of-the-art methods to validate the effectiveness of the proposed LADSL method.
- Research Article · doi:10.1142/s0218001419510066 · International Journal of Pattern Recognition and Artificial Intelligence · Sep 1, 2019 (2)
Subspace learning has been widely utilized to extract discriminative features for classification tasks such as face recognition, even when facial images are occluded or corrupted. However, the performance of most existing methods degrades significantly when the data are contaminated with severe noise, especially when the magnitude of the gross corruption can be arbitrarily large. To this end, this paper proposes a novel discriminative subspace learning method based on the well-known low-rank representation (LRR). Specifically, a discriminative low-rank representation and the projecting subspace are learned simultaneously, in a supervised way. To avoid the deviation from the original solution caused by relaxation, we adopt the Schatten [Formula: see text]-norm and [Formula: see text]-norm instead of the nuclear norm and [Formula: see text]-norm, respectively. Experimental results on two well-known databases, PIE and ORL, demonstrate that the proposed method achieves better classification scores than state-of-the-art approaches.
- Research Article · doi:10.1007/s11063-020-10340-6 · Neural Processing Letters · Sep 4, 2020 (7)
The global and local geometric structures of data play a key role in subspace learning. Although many manifold-based subspace learning methods have been proposed to preserve the local geometric structure of data, they usually characterize it with a predefined neighbor graph. However, the predefined neighbor graph might not be optimal, since it stays fixed during the subsequent subspace learning process. Moreover, most manifold-based subspace learning methods ignore the global structure of data. To address these issues, we propose a low-rank discriminative adaptive graph preserving (LRDAGP) subspace learning method for image feature extraction and recognition that integrates low-rank representation, adaptive manifold learning, and a supervised regularizer into a unified framework. To capture the optimal local geometric structure of the data, LRDAGP adopts an adaptive manifold learning strategy in which the neighbor graph is updated during the subspace learning process. To capture the optimal global structure, LRDAGP also seeks low-rank representations of the data in a low-dimensional subspace during subspace learning. Moreover, to improve the discrimination ability of the learned subspace, a supervised regularizer is designed and incorporated into the LRDAGP model. Experimental results on several image datasets show that LRDAGP is effective for image feature extraction and recognition.
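The adaptive-neighbor-graph step can be sketched with the closed-form weight update popularized by the clustering-with-adaptive-neighbors (CAN) family of models; this is an illustrative stand-in for an adaptive graph update of this kind, with `k` and the data chosen arbitrarily, not LRDAGP itself:

```python
import numpy as np

def adaptive_graph(Y, k=5):
    """Recompute neighbor weights from the current projected data Y (d x n).
    Each row of S is the closed-form solution of
        min_s  sum_j d_ij * s_j + gamma_i * s_j^2   s.t.  s >= 0, sum_j s_j = 1,
    with gamma_i set so that exactly k weights per sample are nonzero."""
    n = Y.shape[1]
    sq = (Y * Y).sum(axis=0)
    D = np.maximum(sq[:, None] + sq[None, :] - 2 * Y.T @ Y, 0)  # squared distances
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[1:k + 2]     # k+1 nearest neighbors, self excluded
        d = D[i, idx]                       # sorted ascending
        den = k * d[k] - d[:k].sum() + 1e-12
        S[i, idx[:k]] = np.maximum((d[k] - d[:k]) / den, 0)
    return S

rng = np.random.default_rng(0)
Y = rng.standard_normal((5, 30))            # stands in for projected data P^T X
S = adaptive_graph(Y, k=5)
```

Because S is recomputed from the current projection, alternating this update with the projection-learning step realizes the "neighbor graph adaptively updated during subspace learning" idea described above.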
- Research Article · doi:10.1016/j.eswa.2024.123831 · Expert Systems with Applications · Mar 26, 2024 (6)
Discriminative sparse subspace learning with manifold regularization
- Research Article · doi:10.1155/2022/5874722 · Computational Intelligence and Neuroscience · May 17, 2022
This paper uses feature subspace learning and cross-media retrieval analysis to construct an advertising design and communication model. To address the limitations of traditional feature subspace learning models and let samples effectively retain their local structure and discriminative properties after projection into the feature space, the paper proposes a discriminative feature subspace learning model based on low-rank representation (LRR): it explores the local structure of samples through LRR and uses the representation coefficients as similarity constraints on samples in the projection space, so that the projection subspace better maintains the local nearest-neighbor relationships among samples. Building on common-subspace learning, the paper uses the extreme learning machine to improve cross-modal retrieval accuracy, mining deeper data features and maximizing the correlation between different modalities so that the learned shared subspace is more discriminative. It also proposes realizing cross-modal retrieval with a deep convolutional generative adversarial network, using unlabeled samples to further explore the correlations among modalities and improve cross-modal performance. The clustering quality of images and audio is corrected in the reduced feature subspace by an optimization algorithm based on similarity transfer, and three active learning strategies are designed to estimate the conditional probability of unannotated samples around user-annotated samples during relevance feedback, improving cross-media retrieval efficiency when feedback samples are limited. Experimental results show that the method accurately measures cross-media relevance and effectively achieves mutual retrieval between image and audio data. Through this study of cross-media advertising design and communication models based on feature subspace learning, the work helps guide designers and artists in better utilizing digital media technology for artistic design, in both theoretical research and applied practice.
- Research Article · doi:10.1016/j.eswa.2021.116359 · Expert Systems with Applications · Dec 11, 2021 (9)
Graph-based adaptive and discriminative subspace learning for face image clustering
- Research Article · doi:10.1016/j.neucom.2021.02.002 · Neurocomputing · Feb 17, 2021 (28)
Tensor low-rank sparse representation for tensor subspace learning
- Book Chapter · doi:10.1007/978-3-319-27674-8_22 · Jan 1, 2016 (1)
Traditional subspace learning methods directly calculate the statistical properties of the original input images while ignoring the different contributions of different image components. In fact, the noise in an image (e.g., illumination, shadow) often has a negative influence on learning the desired subspace and should contribute little to image recognition. To tackle this problem, we propose a novel subspace learning method named Discriminant Manifold Learning via Sparse Coding (DML_SC). In our method, we first decompose the input image into several components via dictionary learning and then regroup the components into a More Important Part (MIP) and a Less Important Part (LIP). The MIP can be regarded as the clean part of the original image residing on a nonlinear submanifold, while the LIP is the noise in the image. Finally, the MIP and LIP are incorporated into manifold learning to learn the desired discriminative subspace. The proposed method can deal with data with and without labels, yielding supervised and unsupervised variants of DML_SC. Experimental results show that DML_SC achieves the best performance on image recognition and clustering tasks compared with well-known subspace learning and sparse representation methods.
- Research Article · doi:10.1109/embc.2014.6944447 · Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) · Aug 1, 2014 (7)
Spike sorting is a fundamental preprocessing step for many neuroscience studies that rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering, using a histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering, learning a discriminative one-dimensional subspace for clustering at each level of the hierarchy until an almost unimodal distribution is achieved in that subspace. The algorithms are tested on synthetic and in-vivo data and compared against two widely used spike sorting methods. The comparative results demonstrate that our methods achieve substantially higher accuracy in a lower-dimensional feature space, are highly robust to noise, and provide significantly better cluster separability in the learned subspace than in subspaces obtained by principal component analysis or the wavelet transform.
- Research Article · doi:10.1371/journal.pone.0215450 · PLOS ONE · May 7, 2019 (13)
Feature subspace learning plays a significant role in pattern recognition, and many efforts have been made to build increasingly discriminative learning models. Recently, several discriminative feature learning methods based on representation models have been proposed; they have attracted considerable attention and achieved success in practical applications. Nevertheless, these methods construct the learning model using only the class labels of the training instances and fail to consider the essential subspace structural information hidden in them. In this paper, we propose a robust feature subspace learning approach based on low-rank representation. In our approach, the low-rank representation coefficients serve as weights to construct the constraint term for feature learning, which introduces a subspace structural-similarity constraint into the learning model to facilitate data adaptation and robustness. Moreover, by placing subspace learning and low-rank representation in a unified framework, they benefit each other during the iteration process to reach an overall optimum. To achieve extra discrimination, linear regression is also incorporated into our model to enforce the projected features to lie close to their label-based centers. Furthermore, an iterative numerical scheme is designed to solve the proposed objective function and ensure convergence. Extensive experimental results on several public image datasets demonstrate the advantages and effectiveness of our approach compared with existing methods.
- Research Article · doi:10.1016/j.neunet.2014.01.001 · Neural Networks · Feb 10, 2014 (66)
Similarity preserving low-rank representation for enhanced data representation and effective subspace learning
- Conference Article · doi:10.1109/icip.2015.7351544 · IEEE International Conference on Image Processing (ICIP) · Sep 1, 2015
Visual attributes are high-level semantic descriptions of visual data that are close to human language. They have been used extensively in applications such as image classification, active learning, and interactive search. However, their use in subspace learning (or dimensionality reduction) has not yet been considered. In this work, we propose to utilize relative attributes as semantic cues in subspace learning. To this end, we employ Non-negative Matrix Factorization (NMF) constrained by embedded relative attributes to learn a subspace representation of image content. Experiments conducted on two datasets show the effectiveness of attributes in discriminative subspace learning.
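A minimal NMF baseline, plain Lee-Seung multiplicative updates without the relative-attribute constraint the paper adds on top, might look like this (the dimensions, rank, and iteration count are arbitrary choices for demonstration):

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Factor a nonnegative matrix V (m x n) as V ~= W @ H with W, H >= 0,
    using classic multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(1)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

V = np.abs(np.random.default_rng(2).standard_normal((30, 40)))  # toy nonneg. data
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)             # relative error
```

The attribute-constrained variant described above would add a penalty tying the factorization to embedded relative attributes; that term is omitted in this sketch.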
- Research Article · doi:10.1109/tip.2007.914203 · IEEE Transactions on Image Processing · Feb 1, 2008 (98)
Images, as high-dimensional data, usually embody large variabilities. To classify images for versatile applications, an effective algorithm must be designed by systematically considering the data structure, similarity metric, discriminant subspace, and classifier. In this paper, we provide evidence that, besides the Fisher criterion, graph embedding, and tensorization used in many existing methods, the correlation-based similarity metric embodied in supervised multilinear discriminant subspace learning can further improve classification performance. In particular, a novel discriminant subspace learning algorithm, called correlation tensor analysis (CTA), is designed to incorporate both graph-embedded correlational mapping and discriminant analysis in a Fisher-type learning manner. The correlation metric can estimate intrinsic angles and distances for locally isometric embedding, handling cases where the Euclidean metric is incapable of capturing the intrinsic similarities between data points. CTA learns multiple interrelated subspaces to obtain a low-dimensional data representation that reflects both class label information and the intrinsic geometric structure of the data distribution. Extensive comparisons with the most popular subspace learning methods on face recognition demonstrate the effectiveness and superiority of CTA; parameter analysis also reveals its robustness.
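In a simplified, non-tensor form, the correlation-based similarity that CTA builds on can be illustrated as below; the centering-and-normalizing convention and the toy data are assumptions for demonstration, not the CTA algorithm itself:

```python
import numpy as np

def correlation_similarity(X):
    """Correlation between samples (columns of X): center each sample's feature
    vector, then take cosine similarity. Unlike Euclidean distance, this is
    invariant to per-sample additive offsets and positive rescaling
    (e.g. a uniform illumination change)."""
    Xc = X - X.mean(axis=0, keepdims=True)
    Xn = Xc / (np.linalg.norm(Xc, axis=0, keepdims=True) + 1e-12)
    return Xn.T @ Xn

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
X = np.stack([x, 2.0 * x + 3.0], axis=1)   # second sample: rescaled, shifted copy
R = correlation_similarity(X)
d = np.linalg.norm(X[:, 0] - X[:, 1])      # Euclidean distance between the two
```

Correlation rates the two samples as essentially identical even though their Euclidean distance is large, illustrating the case the abstract alludes to where the Euclidean metric fails to capture intrinsic similarity.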
- Research Article · doi:10.1155/2020/8872348 · Complexity · Nov 4, 2020
Recently, cross-view feature learning has become a hot topic in machine learning owing to the wide applications of multiview data. Nevertheless, the distribution discrepancy across views means that instances of different views from the same class can lie farther apart than instances within the same view but from different classes. To address this problem, this paper develops a novel cross-view discriminative feature subspace learning method inspired by layered visual perception in humans. First, the proposed method uses a separable low-rank self-representation model to disentangle the class and view structure layers. Second, a local alignment is constructed with two designed graphs to guide the subspace decomposition in a pairwise way. Finally, a global discriminative constraint on the distribution center of each view is designed to further improve alignment. Extensive cross-view classification experiments on several public datasets show that the proposed method is more effective than existing feature learning methods.