Discriminative Subspace Learning With Adaptive Graph Regularization

Abstract

Many subspace learning methods based on low-rank representation employ a nearest-neighbor graph to preserve the local structure. In these methods, however, the nearest-neighbor graph is a binary matrix, which fails to precisely capture the similarity between distinct samples. Moreover, they require an appropriate number of neighbors to be selected manually, and they cannot adaptively update the similarity graph during projection learning. To tackle these issues, we introduce Discriminative Subspace Learning with Adaptive Graph Regularization (DSL_AGR), an unsupervised subspace learning method that integrates low-rank representation, adaptive graph learning, and nonnegative representation into a unified framework. DSL_AGR imposes a low-rank constraint to capture the global structure of the data and extract more discriminative information. Furthermore, a novel graph regularization term, guided by nonnegative representations, enhances its capability to capture the local structure. Since closed-form solutions of the proposed model are not easily obtained, we devise an iterative optimization algorithm to solve it, and we analyze the computational complexity and convergence of DSL_AGR. Extensive experiments on real-world datasets demonstrate that the proposed method achieves competitive performance compared with other state-of-the-art methods.
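To make the abstract's first point concrete, the sketch below contrasts a binary k-nearest-neighbor graph, whose entries are only 0 or 1, with a smoothly weighted similarity graph. This is an illustrative toy, not the DSL_AGR model itself; the heat-kernel weighting and the `sigma` parameter are assumptions chosen for the example.

```python
import numpy as np

def knn_binary_graph(X, k):
    """Binary k-nearest-neighbor adjacency: 1 for neighbors, 0 otherwise."""
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d[i])[1:k + 1]  # skip self (distance 0)
        W[i, idx] = 1.0
    return np.maximum(W, W.T)            # symmetrize

def weighted_graph(X, sigma=1.0):
    """Heat-kernel similarity graph: weights decay smoothly with distance."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W
```

On a handful of 2-D points, the binary graph cannot distinguish a near neighbor from a slightly farther one, while the weighted graph assigns them different similarities, which is the distinction the abstract draws.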

References (showing 10 of 34 papers)
  • Low-rank discriminative least squares regression for image classification. Zhe Chen + 2 more. Signal Processing, Jan 21, 2020. doi:10.1016/j.sigpro.2020.107485 (open access, cited by 49)
  • Novel cascade filter design of improved sparse low-rank matrix estimation and kernel adaptive filtering for ECG denoising and artifacts cancellation. Ahmed S Eltrass. Biomedical Signal Processing and Control, May 5, 2022. doi:10.1016/j.bspc.2022.103750 (cited by 15)
  • Graph-Regularized Low-Rank Representation for Destriping of Hyperspectral Images. Xiaoqiang Lu + 2 more. IEEE Transactions on Geoscience and Remote Sensing, Jul 1, 2013. doi:10.1109/tgrs.2012.2226730 (cited by 320)
  • Low-rank representation with adaptive graph regularization. Jie Wen + 4 more. Neural Networks, Aug 14, 2018. doi:10.1016/j.neunet.2018.08.007 (cited by 131)
  • Discriminative sparse embedding based on adaptive graph for dimension reduction. Zhonghua Liu + 4 more. Engineering Applications of Artificial Intelligence, Jun 18, 2020. doi:10.1016/j.engappai.2020.103758 (cited by 42)
  • Low-rank adaptive graph embedding for unsupervised feature extraction. Jianglin Lu + 5 more. Pattern Recognition, Nov 20, 2020. doi:10.1016/j.patcog.2020.107758 (cited by 54)
  • Laplacian Regularized Low-Rank Representation and Its Applications. Ming Yin + 2 more. IEEE Transactions on Pattern Analysis and Machine Intelligence, Mar 1, 2016. doi:10.1109/tpami.2015.2462360 (cited by 410)
  • Sparsity preserving projections with applications to face recognition. Lishan Qiao + 2 more. Pattern Recognition, May 18, 2009. doi:10.1016/j.patcog.2009.05.005 (cited by 803)
  • Generalized Nonconvex Low-Rank Tensor Approximation for Multi-View Subspace Clustering. Yongyong Chen + 4 more. IEEE Transactions on Image Processing, Jan 1, 2021. doi:10.1109/tip.2021.3068646 (cited by 148)
  • Spectral Regression: A Unified Approach for Sparse Subspace Learning. Deng Cai + 2 more. Oct 1, 2007. doi:10.1109/icdm.2007.89 (cited by 196)

Similar Papers
  • Research Article: Discriminative sparse subspace learning and its application to unsupervised feature selection. Nan Zhou + 4 more. ISA Transactions, Jan 20, 2016. doi:10.1016/j.isatra.2015.12.011 (cited by 12)

  • Research Article: Locality-Aware Discriminative Subspace Learning for Image Classification. Meenakshi + 1 more. IEEE Transactions on Instrumentation and Measurement, Jan 1, 2022. doi:10.1109/tim.2022.3187735 (cited by 1)

Projection learning is an effective and widely used technique for extracting discriminative features for pattern recognition and classification. In projection learning, it is essential to preserve the global and local structure of the data while extracting discriminative features. However, transforming the source data directly to a target, i.e., a strict binary label matrix, using a projection matrix may result in the loss of some intrinsic information. We propose a locality-aware discriminative subspace learning (LADSL) method to address these limitations. In LADSL, the original data are transformed into a latent space instead of a restrictive label space. The latent space seamlessly integrates the original visual features and the class labels to improve classification performance. The projection matrix and classification parameters are jointly optimized to supervise the discriminative subspace learning. Additionally, LADSL exploits the adaptive local structure to preserve the nearest-neighbor relationship among data samples while learning the projections, achieving superior classification performance. Experiments on various datasets for face and object recognition compare the results with state-of-the-art methods and validate the effectiveness of the proposed LADSL method.
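The limitation this abstract describes, regressing data directly onto a strict binary label matrix, can be sketched with a closed-form ridge projection. This is the rigid baseline that LADSL moves away from, not LADSL itself; the regularization weight `lam` is an illustrative assumption.

```python
import numpy as np

def projection_to_binary_labels(X, y, lam=0.1):
    """Ridge projection of data onto a strict one-hot label matrix.
    This rigid target is exactly what LADSL replaces with a latent space."""
    Y = np.eye(int(y.max()) + 1)[y]                  # n x c binary target
    d = X.shape[1]
    # closed-form ridge solution: P = (X^T X + lam*I)^{-1} X^T Y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(X, P):
    """Classify by the largest response among the label columns."""
    return np.argmax(X @ P, axis=1)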

  • Research Article: Discriminative Low-Rank Subspace Learning with Nonconvex Penalty. Kan Xie + 3 more. International Journal of Pattern Recognition and Artificial Intelligence, Sep 1, 2019. doi:10.1142/s0218001419510066 (cited by 2)

Subspace learning has been widely utilized to extract discriminative features for classification tasks such as face recognition, even when facial images are occluded or corrupted. However, the performance of most existing methods degrades significantly when the data are contaminated with severe noise, especially when the magnitude of the gross corruption can be arbitrarily large. To this end, this paper proposes a novel discriminative subspace learning method based on the well-known low-rank representation (LRR). Specifically, a discriminant low-rank representation and the projecting subspace are learned simultaneously, in a supervised way. To avoid deviating from the original solution through relaxation, we adopt the Schatten [Formula: see text]-norm and [Formula: see text]-norm instead of the nuclear norm and [Formula: see text]-norm, respectively. Experimental results on two famous databases, i.e. PIE and ORL, demonstrate that the proposed method achieves better classification scores than state-of-the-art approaches.
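The Schatten norm mentioned above (its exponent does not survive in the extracted text, hence the [Formula: see text] placeholders) generalizes the nuclear norm. A minimal sketch of computing it for a generic exponent p:

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm: (sum_i sigma_i^p)^(1/p) over the singular values of A.
    p = 1 recovers the nuclear norm and p = 2 the Frobenius norm."""
    s = np.linalg.svd(A, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))
```

Values of p below 1 give the nonconvex surrogates that approximate the rank function more tightly than the nuclear norm, which is the motivation the abstract cites for avoiding relaxation bias.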

  • Research Article: Low-Rank Discriminative Adaptive Graph Preserving Subspace Learning. Haishun Du + 3 more. Neural Processing Letters, Sep 4, 2020. doi:10.1007/s11063-020-10340-6 (cited by 7)

The global and local geometric structures of data play a key role in subspace learning. Although many manifold-based subspace learning methods have been proposed to preserve the local geometric structure of data, they usually characterize it with a predefined neighbor graph. However, the predefined neighbor graph might not be optimal, since it is kept fixed during the subsequent subspace learning process. Moreover, most manifold-based subspace learning methods ignore the global structure of data. To address these issues, we propose a low-rank discriminative adaptive graph preserving (LRDAGP) subspace learning method for image feature extraction and recognition by integrating low-rank representation, adaptive manifold learning, and a supervised regularizer into a unified framework. To capture the optimal local geometric structure of data, LRDAGP adopts an adaptive manifold learning strategy in which the neighbor graph is adaptively updated during the subspace learning process. To capture the optimal global structure, LRDAGP also seeks low-rank representations of data in a low-dimensional subspace during subspace learning. Moreover, to improve the discrimination ability of the learned subspace, a supervised regularizer is designed and incorporated into the LRDAGP model. Experimental results on several image datasets show that LRDAGP is effective for image feature extraction and recognition.
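The adaptive strategy described above, in which the neighbor graph is rebuilt while the projection is being learned, can be sketched as a simple alternation. This is an illustrative simplification, not the LRDAGP algorithm: the degree normalization of full LPP and the low-rank and supervised terms are omitted, and the heat-kernel graph with parameter `sigma` is an assumption.

```python
import numpy as np

def adaptive_graph_projection(X, dim, sigma=1.0, n_iter=3):
    """Alternate between (a) rebuilding the similarity graph from projected
    distances and (b) re-learning a locality-preserving projection as the
    smallest-eigenvalue directions of X^T L X."""
    P = np.eye(X.shape[1])[:, :dim]             # start with identity projection
    for _ in range(n_iter):
        Z = X @ P                                # project with current P
        d2 = np.sum((Z[:, None] - Z[None, :]) ** 2, axis=2)
        W = np.exp(-d2 / (2 * sigma ** 2))       # graph adapts to projected space
        np.fill_diagonal(W, 0.0)
        L = np.diag(W.sum(axis=1)) - W           # graph Laplacian
        vals, vecs = np.linalg.eigh(X.T @ L @ X)
        P = vecs[:, :dim]                        # smallest-eigenvalue directions
    return P, W
```

Each pass lets the graph reflect distances in the current projected space rather than a fixed predefined neighborhood, which is the point of the adaptive strategy.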

  • Research Article: Discriminative sparse subspace learning with manifold regularization. Wenyi Feng + 10 more. Expert Systems with Applications, Mar 26, 2024. doi:10.1016/j.eswa.2024.123831 (cited by 6)

  • Research Article: A Cross-Media Advertising Design and Communication Model Based on Feature Subspace Learning. Shanshan Li. Computational Intelligence and Neuroscience, May 17, 2022. doi:10.1155/2022/5874722

This paper uses feature subspace learning and cross-media retrieval analysis to construct an advertising design and communication model. To address the problems of the traditional feature subspace learning model and let samples effectively maintain their local structure and discriminative properties after projection into the feature space, this paper proposes a discriminative feature subspace learning model based on Low-Rank Representation (LRR), which explores the local structure of samples through LRR and uses the representation coefficients as similarity constraints on samples in the projection space, so that the projection subspace better maintains the local nearest-neighbor relationships of samples. Building on common subspace learning, this paper uses the extreme learning machine method to improve cross-modal retrieval accuracy, mining deeper data features and maximizing the correlation between different modalities so that the learned shared subspace is more discriminative; meanwhile, it proposes realizing cross-modal retrieval with a deep convolutional generative adversarial network, using unlabeled samples to further explore the correlation of different modal data and improve cross-modal performance. The clustering quality of images and audio is corrected in the feature subspace obtained by dimensionality reduction through an optimization algorithm based on similarity transfer. Three active learning strategies are designed to calculate the conditional probability of unannotated samples around user-annotated samples during the relevance feedback process, thus improving the efficiency of cross-media retrieval when feedback samples are limited. The experimental results show that the method accurately measures cross-media relevance and effectively achieves mutual retrieval between image and audio data. Studying cross-media advertising design and communication models based on feature subspace learning can positively advance commercial advertising design by guiding designers and artists to better utilize digital media technology, in both theoretical research and applied practice.

  • Research Article: Graph-based adaptive and discriminative subspace learning for face image clustering. Mengmeng Liao + 2 more. Expert Systems with Applications, Dec 11, 2021. doi:10.1016/j.eswa.2021.116359 (cited by 9)

  • Research Article: Tensor low-rank sparse representation for tensor subspace learning. Shiqiang Du + 4 more. Neurocomputing, Feb 17, 2021. doi:10.1016/j.neucom.2021.02.002 (cited by 28)

  • Book Chapter: Discriminant Manifold Learning via Sparse Coding for Image Analysis. Meng Pang + 3 more. Jan 1, 2016. doi:10.1007/978-3-319-27674-8_22 (cited by 1)

Traditional subspace learning methods directly calculate the statistical properties of the original input images while ignoring the different contributions of different image components. In fact, noise in the image (e.g., illumination, shadow) often has a negative influence on learning the desired subspace and should contribute little to image recognition. To tackle this problem, we propose a novel subspace learning method named Discriminant Manifold Learning via Sparse Coding (DML_SC). In our method, we first decompose the input image into several components via dictionary learning, and then regroup the components into a More Important Part (MIP) and a Less Important Part (LIP). The MIP can be regarded as the clean part of the original image residing on a nonlinear submanifold, while the LIP is treated as noise. Finally, the MIP and LIP are incorporated into manifold learning to learn the desired discriminative subspace. The proposed method can deal with data both with and without labels, yielding supervised and unsupervised DML_SC variants. Experimental results show that DML_SC achieves the best performance on image recognition and clustering tasks compared with well-known subspace learning and sparse representation methods.
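The sparse-coding step that DML_SC builds on, decomposing a signal over a dictionary, can be sketched with the classic ISTA iteration. The dictionary is assumed to be given here (DML_SC learns it), and `lam` is an illustrative choice.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Sparse coding of signal x over dictionary D via ISTA:
    minimize 0.5 * ||x - D z||^2 + lam * ||z||_1."""
    t = 1.0 / np.linalg.norm(D, 2) ** 2       # step size from the spectral norm
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = z - t * D.T @ (D @ z - x)          # gradient step on the quadratic term
        z = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold
    return z
```

The soft threshold zeroes out small coefficients, which is what lets the decomposition separate dominant image components from residual ones.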

  • Research Article: Unsupervised spike sorting based on discriminative subspace learning. Mohammad Reza Keshtkaran + 1 more. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Aug 1, 2014. doi:10.1109/embc.2014.6944447 (cited by 7)

Spike sorting is a fundamental preprocessing step for many neuroscience studies that rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering; it uses a histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering, learning a discriminative 1-dimensional subspace for clustering at each level of the hierarchy until an almost unimodal distribution is achieved in the subspace. The algorithms are tested on synthetic and in-vivo data and compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods can achieve substantially higher accuracy in a lower dimensional feature space and are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in the subspace obtained by principal component analysis or wavelet transform.
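The core idea of the second algorithm, learning a discriminative 1-D direction and splitting the data along it, can be sketched in toy form. This is not the paper's algorithm: the PCA initialization, the Fisher-style direction, and the mean-threshold split are simplifying assumptions for illustration.

```python
import numpy as np

def discriminative_1d_sort(X, n_iter=5):
    """Toy sketch: alternately estimate a two-way split and a Fisher-style
    1-D discriminative direction, then cluster along that direction."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[0]                               # initialize along the top PC
    labels = (proj > np.median(proj)).astype(int)
    for _ in range(n_iter):
        m0, m1 = X[labels == 0].mean(0), X[labels == 1].mean(0)
        Sw = np.cov(X[labels == 0].T) + np.cov(X[labels == 1].T)  # within-class scatter
        w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)  # Fisher direction
        w /= np.linalg.norm(w)
        proj = X @ w
        labels = (proj > proj.mean()).astype(int)   # re-split along the projection
    return w, labels
```

A histogram of `proj` would show two modes for two well-separated units, which is how the paper's first algorithm detects the number of neurons.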

  • Research Article: Subspace structural constraint-based discriminative feature learning via nonnegative low rank representation. Ao Li + 6 more. PLOS ONE, May 7, 2019. doi:10.1371/journal.pone.0215450 (cited by 13)

Feature subspace learning plays a significant role in pattern recognition, and many efforts have been made to generate increasingly discriminative learning models. Recently, several discriminative feature learning methods based on a representation model have been proposed, which have not only attracted considerable attention but also achieved success in practical applications. Nevertheless, these methods for constructing the learning model simply depend on the class labels of the training instances and fail to consider the essential subspace structural information hidden in them. In this paper, we propose a robust feature subspace learning approach based on a low-rank representation. In our approach, the low-rank representation coefficients are considered as weights to construct the constraint item for feature learning, which can introduce a subspace structural similarity constraint in the proposed learning model for facilitating data adaptation and robustness. Moreover, by placing the subspace learning and low-rank representation into a unified framework, they can benefit each other during the iteration process to realize an overall optimum. To achieve extra discrimination, linear regression is also incorporated into our model to enforce the projection features around and close to their label-based centers. Furthermore, an iterative numerical scheme is designed to solve our proposed objective function and ensure convergence. Extensive experimental results obtained using several public image datasets demonstrate the advantages and effectiveness of our novel approach compared with those of the existing methods.

  • Research Article: Similarity preserving low-rank representation for enhanced data representation and effective subspace learning. Zhao Zhang + 2 more. Neural Networks, Feb 10, 2014. doi:10.1016/j.neunet.2014.01.001 (cited by 66)

  • Conference Article: Attribute constrained subspace learning. Mohammadreza Babaee + 4 more. Sep 1, 2015. doi:10.1109/icip.2015.7351544

Visual attributes are high-level semantic descriptions of visual data that are close to the human language. They have been used intensively in various applications such as image classification, active learning, and interactive search. However, the usage of attributes in subspace learning (or dimensionality reduction) has not been considered yet. In this work, we propose to utilize relative attributes as semantic cues in subspace learning. To this end, we employ Non-negative Matrix Factorization (NMF) constrained by embedded relative attributes to learn a subspace representation of image content. Experiments conducted on two datasets show the efficiency of attributes in discriminative subspace learning.
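The NMF building block this paper constrains can be sketched with the standard Lee-Seung multiplicative updates (without the relative-attribute constraint, which is the paper's contribution; the rank and iteration count are illustrative).

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W H with W, H >= 0.
    Elementwise multiplicative steps keep both factors nonnegative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The attribute constraint in the paper would enter as an extra term in this objective; the bare updates above only minimize the Frobenius reconstruction error.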

  • Research Article: Image Classification Using Correlation Tensor Analysis. Yun Fu + 1 more. IEEE Transactions on Image Processing, Feb 1, 2008. doi:10.1109/tip.2007.914203 (cited by 98)

Images, as high-dimensional data, usually embody large variabilities. To classify images for versatile applications, an effective algorithm must be designed by systematically considering the data structure, similarity metric, discriminant subspace, and classifier. In this paper, we provide evidence that, besides the Fisher criterion, graph embedding, and tensorization used in many existing methods, the correlation-based similarity metric embodied in supervised multilinear discriminant subspace learning can further improve classification performance. In particular, a novel discriminant subspace learning algorithm, called correlation tensor analysis (CTA), is designed to incorporate both graph-embedded correlational mapping and discriminant analysis in a Fisher-type learning manner. The correlation metric can estimate intrinsic angles and distances for the locally isometric embedding, which handles cases where the Euclidean metric is incapable of capturing the intrinsic similarities between data points. CTA learns multiple interrelated subspaces to obtain a low-dimensional data representation reflecting both class-label information and the intrinsic geometric structure of the data distribution. Extensive comparisons with the most popular subspace learning methods on face recognition evaluations demonstrate the effectiveness and superiority of CTA. Parameter analysis also reveals its robustness.
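The correlation-based metric the abstract contrasts with the Euclidean one is, after centering, just the cosine between feature vectors; a minimal sketch showing it is insensitive to the affine rescaling (e.g., global illumination change) that shifts Euclidean distances:

```python
import numpy as np

def correlation_similarity(x, y):
    """Pearson correlation of two feature vectors: the cosine of the
    mean-centered vectors, in [-1, 1]."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / (np.linalg.norm(xc) * np.linalg.norm(yc)))
```

A vector and its positively rescaled, shifted copy correlate perfectly even though their Euclidean distance is large, which is the kind of intrinsic similarity the abstract says the Euclidean metric can miss.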

  • Research Article: Bio-Inspired Structure Representation Based Cross-View Discriminative Subspace Learning via Simultaneous Local and Global Alignment. Ao Li + 5 more. Complexity, Nov 4, 2020. doi:10.1155/2020/8872348

Recently, cross-view feature learning has been a hot topic in machine learning due to the wide application of multiview data. Nevertheless, the distribution discrepancy between views means that instances of different views from the same class can be farther apart than instances within the same view but from different classes. To address this problem, in this paper we develop a novel cross-view discriminative feature subspace learning method inspired by layered human visual perception. First, the proposed method utilizes a separable low-rank self-representation model to disentangle the class and view structure layers. Second, a local alignment is constructed with two designed graphs to guide the subspace decomposition in a pairwise way. Finally, a global discriminative constraint on the distribution center in each view is designed to further improve alignment. Extensive cross-view classification experiments on several public datasets show that the proposed method is more effective than existing feature learning methods.

More from: The Computer Journal
  • Research Article: Enhancing conversational agent responses with EXLNetT using learnable enhanced Laplacian kernel attention mechanism, deep bi-affine network, and hybrid positional encoding. N Muthukumaran + 1 more. The Computer Journal, Nov 9, 2025. doi:10.1093/comjnl/bxaf132
  • Research Article: CV content recognition using YOLOv8 and Tesseract-OCR deep learning. Amany M Sarhan + 6 more. The Computer Journal, Oct 27, 2025. doi:10.1093/comjnl/bxaf124
  • Research Article: A fault diagnosis method for systems based on labeled time Petri nets with tables. Jian Song + 1 more. The Computer Journal, Oct 21, 2025. doi:10.1093/comjnl/bxaf118
  • Research Article: AMOS2: adaptive multi-objective seed schedule in gray-box fuzzing. Weihua Jiao + 4 more. The Computer Journal, Oct 21, 2025. doi:10.1093/comjnl/bxaf123
  • Research Article: ECaps-GTR: optimizing spatiotemporal EEG emotion recognition via the augmented capsule-gated transformer. Xiaoliang Wang + 5 more. The Computer Journal, Oct 18, 2025. doi:10.1093/comjnl/bxaf119
  • Research Article: Privacy-preserving label-constrained reachability queries for large graphs in cloud environments. Zenglu Li + 3 more. The Computer Journal, Oct 13, 2025. doi:10.1093/comjnl/bxaf116
  • Research Article: A non-dominated archived multi-objective harmony search algorithm for identifying influencers in social networks. Taniya Chatterjee + 4 more. The Computer Journal, Oct 8, 2025. doi:10.1093/comjnl/bxaf109
  • Research Article: Joint optimization for collaborative data collection in wireless sensor networks with multi-UAV and multi-MUV. Yu Lu + 4 more. The Computer Journal, Sep 29, 2025. doi:10.1093/comjnl/bxaf114
  • Research Article: H2Sketch: real-time H-value measurement of key flows in high-speed networks. Jun Xu + 3 more. The Computer Journal, Sep 9, 2025. doi:10.1093/comjnl/bxaf106
  • Research Article: Quadruplet network based template attack. Xiaonian Wu + 4 more. The Computer Journal, Sep 6, 2025. doi:10.1093/comjnl/bxaf104
