Locality-Aware Discriminative Subspace Learning for Image Classification
Projection learning is an effective and widely used technique for extracting discriminative features for pattern recognition and classification. In projection learning, it is essential to preserve both the global and the local structure of the data while extracting discriminative features. However, transforming the source data directly to a rigid target, i.e., the strict binary label matrix, with a projection matrix may discard some intrinsic information. We propose a locality-aware discriminative subspace learning (LADSL) method to address these limitations. In LADSL, the original data are transformed into a latent space instead of a restrictive label space; the latent space seamlessly integrates the original visual features with the class labels to improve classification performance. The projection matrix and the classification parameters are jointly optimized to supervise the discriminative subspace learning. Additionally, LADSL exploits an adaptive local structure to preserve nearest-neighbor relationships among the data samples while learning the projections, yielding superior classification performance. Experiments on various face- and object-recognition data sets, with results compared against state-of-the-art methods, validate the effectiveness of the proposed LADSL method.
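As a rough illustration of the latent-target idea described above (regressing the data onto a blend of the label matrix and a feature-derived component rather than onto the strict binary labels), here is a minimal numpy sketch; the blend weight `alpha`, the ridge formulation, and all names are illustrative assumptions, not the authors' actual LADSL objective:

```python
import numpy as np

def learn_projection(X, Y, alpha=0.5, lam=1e-2):
    """Toy sketch: regress X onto a latent target T that blends the
    binary label matrix Y with a feature-derived reconstruction of Y,
    instead of regressing onto Y directly.

    X: (n_samples, d) data; Y: (n_samples, c) one-hot labels.
    Returns W: (d, c) projection learned by ridge regression onto T."""
    d = X.shape[1]
    # Least-squares reconstruction of Y from X (stand-in for the
    # learned latent component).
    W0 = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
    T = alpha * Y + (1 - alpha) * (X @ W0)   # softened latent target
    # Final projection regresses X onto the softened target T.
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)
    return W

# Tiny example: two well-separated classes along different axes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2.0, 0.0, 0.0], 0.1, (10, 3)),
               rng.normal([0.0, 2.0, 0.0], 0.1, (10, 3))])
Y = np.repeat(np.eye(2), 10, axis=0)
W = learn_projection(X, Y)
pred = (X @ W).argmax(axis=1)
```

Softening the target this way avoids forcing samples onto exact 0/1 values, which is the intuition behind replacing the strict label space with a latent space.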
- Research Article
6
- 10.1016/j.eswa.2024.123831
- Mar 26, 2024
- Expert Systems with Applications
Discriminative sparse subspace learning with manifold regularization
- Research Article
- 10.1093/comjnl/bxae049
- Jun 10, 2024
- The Computer Journal
Many subspace learning methods based on low-rank representation employ the nearest-neighborhood graph to preserve the local structure. However, in these methods the nearest-neighborhood graph is a binary matrix, which fails to precisely capture the similarity between distinct samples. Additionally, these methods require manually selecting an appropriate number of neighbors, and they cannot adaptively update the similarity graph during projection learning. To tackle these issues, we introduce Discriminative Subspace Learning with Adaptive Graph Regularization (DSL_AGR), an unsupervised subspace learning method that integrates low-rank representation, adaptive graph learning and nonnegative representation into a unified framework. DSL_AGR introduces a low-rank constraint to capture the global structure of the data and extract more discriminative information. Furthermore, a novel graph regularization term in DSL_AGR is guided by nonnegative representations to enhance its capability of capturing the local structure. Since closed-form solutions for the proposed method are not easily obtained, we devise an iterative optimization algorithm to solve it, and we analyze the computational complexity and convergence of DSL_AGR. Extensive experiments on real-world datasets demonstrate that the proposed method achieves competitive performance compared with other state-of-the-art methods.
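The contrast drawn above between a binary nearest-neighbor graph and a similarity-weighted graph can be made concrete. The sketch below is generic (fixed heat-kernel bandwidth `sigma`, fixed `k`), not the adaptive graph update of DSL_AGR itself:

```python
import numpy as np

def binary_knn_graph(X, k):
    """0/1 adjacency: each sample connects to its k nearest neighbours.
    All chosen neighbours get the same weight, however close they are."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    n = len(X)
    A = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[1:k + 1]   # skip self at position 0
        A[i, idx] = 1.0
    return A

def weighted_graph(X, sigma=1.0):
    """Heat-kernel weights: similarity decays smoothly with distance,
    so, unlike the binary graph, it distinguishes near from very near."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    W = np.exp(-D**2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return W

X = np.array([[0.0], [0.1], [5.0]])   # two close samples, one far away
A = binary_knn_graph(X, k=1)
W = weighted_graph(X)
```

Here `A[0, 1]` and `A[0, 2]` would both be 0 or 1 depending only on the neighbor cutoff, while `W[0, 1]` is much larger than `W[0, 2]`, which is exactly the graded similarity the binary graph cannot express.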
- Research Article
12
- 10.1016/j.isatra.2015.12.011
- Jan 20, 2016
- ISA Transactions
Discriminative sparse subspace learning and its application to unsupervised feature selection
- Research Article
40
- 10.1016/j.neucom.2019.01.069
- Feb 1, 2019
- Neurocomputing
Structure preservation and distribution alignment in discriminative transfer subspace learning
- Research Article
3
- 10.1049/iet-bmt.2019.0104
- Mar 5, 2020
- IET Biometrics
Human ageing has a large impact on cross-age face recognition, and the effect of ageing on face recognition in non-ideal images has not yet been well addressed. In this study, the authors propose a discriminative common feature subspace learning method to deal with this problem. Specifically, they observe that samples of the same individual separated by large age gaps have different distributions in the original space, and employ the maximum mean discrepancy as the distance measure between the sample means of the different distributions. This distance measure is then integrated into the Fisher criterion to learn a discriminative common feature subspace. The aim is to map images of different ages into the common subspace and to construct a new feature representation that is robust to age variations and discriminative across subjects. To evaluate the performance of the proposed method on cross-age face recognition, the authors conduct extensive experiments on the CACD and FG-Net databases. Experimental results show that the proposed method outperforms other subspace-based methods and state-of-the-art cross-age face recognition methods.
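The distance between sample means that the maximum mean discrepancy measures can be sketched with a linear kernel, where it reduces to the squared distance between the two sample means (the paper presumably uses a richer kernel; the toy features below are invented):

```python
import numpy as np

def linear_mmd(X, Y):
    """Squared MMD with a linear kernel: the squared Euclidean distance
    between the sample means of two sets. A simple surrogate for the
    kernel MMD used to compare distributions across age gaps."""
    return float(np.sum((X.mean(axis=0) - Y.mean(axis=0)) ** 2))

# Same person photographed young vs. old: feature means drift apart.
young = np.array([[1.0, 2.0], [1.2, 1.8]])
old = np.array([[3.0, 2.1], [2.8, 1.9]])
gap = linear_mmd(young, old)
```

A subspace that minimizes this gap (while keeping the Fisher criterion's class separation) pulls the two age distributions of the same person together.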
- Research Article
2
- 10.1007/s40009-017-0543-8
- Apr 20, 2017
- National Academy Science Letters
In the past few years, face attributes have attracted much attention. In this paper, for the first time, we combine discriminant subspace learning with the idea of pattern reconstruction to build a face attribute classification framework. For each attribute considered, the framework first learns an attribute subspace using a discriminant subspace learning method that also supports pattern reconstruction. The framework then reconstructs the attribute state of an input query image with the learned subspace, and classifies the face attribute by minimum reconstruction error. By applying the classification framework repeatedly to different attributes, we obtain multiple classification outputs. Based on these outputs, we select matching objects for each query image using the generalized Hamming distance to realize face recognition. The proposed attribute classification framework and face recognition approach are validated on the public AR and Weizmann face databases. Experimental results demonstrate their effectiveness compared with several related methods.
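Minimum-reconstruction-error classification, the core step of the framework described above, can be sketched with per-class PCA subspaces; PCA here is a stand-in for the paper's actual discriminant subspace learning method, and the toy data are invented:

```python
import numpy as np

def fit_subspace(X, dim):
    """Mean plus top principal directions of one class (via SVD)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:dim].T          # basis columns span the subspace

def reconstruct(x, mu, B):
    """Project x onto the class subspace and map it back."""
    return mu + B @ (B.T @ (x - mu))

def classify(x, models):
    """Assign x to the class whose subspace reconstructs it best."""
    errs = [np.linalg.norm(x - reconstruct(x, mu, B)) for mu, B in models]
    return int(np.argmin(errs))

# Class 0 varies along the x-axis, class 1 along the y-axis.
X0 = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
X1 = np.array([[0.0, 1.0], [0.0, 2.0], [0.0, 3.0]])
models = [fit_subspace(X0, 1), fit_subspace(X1, 1)]
```

A query near the x-axis reconstructs almost perfectly from the class-0 subspace but poorly from class 1, so the minimum-error rule picks class 0.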
- Research Article
9
- 10.1016/j.eswa.2021.116359
- Dec 11, 2021
- Expert Systems with Applications
Graph-based adaptive and discriminative subspace learning for face image clustering
- Research Article
- 10.1155/2020/8872348
- Nov 4, 2020
- Complexity
Recently, cross-view feature learning has become a hot topic in machine learning owing to the wide applications of multiview data. Nevertheless, the distribution discrepancy across views means that instances of different views from the same class can lie farther apart than instances within the same view but from different classes. To address this problem, in this paper we develop a novel cross-view discriminative feature subspace learning method inspired by layered human visual perception. First, the proposed method uses a separable low-rank self-representation model to disentangle the class and view structure layers, respectively. Second, a local alignment is constructed with two designed graphs to guide the subspace decomposition in a pairwise way. Third, a global discriminative constraint on the distribution center of each view is designed to further improve the alignment. Extensive cross-view classification experiments on several public datasets show that the proposed method is more effective than other existing feature learning methods.
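The self-representation idea underlying the low-rank model above, expressing each sample as a combination of the others so that X ≈ XZ, can be sketched with a simple ridge penalty in place of the paper's separable low-rank constraint (and without the usual zero-diagonal constraint, for brevity):

```python
import numpy as np

def self_representation(X, lam=0.1):
    """Ridge-regularized self-representation: columns of X are
    expressed as combinations of each other, X ~= X @ Z.
    (The paper uses a separable low-rank model; the ridge penalty
    here is a simplification.)"""
    G = X.T @ X                       # Gram matrix of the samples
    n = G.shape[0]
    return np.linalg.solve(G + lam * np.eye(n), G)

# Columns 0-1 lie on one line (one layer), columns 2-3 on another.
X = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 2.0]])
Z = self_representation(X)
```

Samples from the same underlying subspace pick up large mutual coefficients in `Z`, while coefficients across subspaces stay (near) zero, which is what lets such models separate structure layers.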
- Research Article
2
- 10.1142/s0218001419510066
- Sep 1, 2019
- International Journal of Pattern Recognition and Artificial Intelligence
Subspace learning has been widely used to extract discriminative features for classification tasks such as face recognition, even when facial images are occluded or corrupted. However, the performance of most existing methods degrades significantly when the data are contaminated with severe noise, especially when the magnitude of the gross corruption can be arbitrarily large. To this end, this paper proposes a novel discriminative subspace learning method based on the well-known low-rank representation (LRR). Specifically, a discriminant low-rank representation and the projection subspace are learned simultaneously, in a supervised way. To avoid deviating from the original solution through relaxation, we adopt the Schatten p-norm and the ℓ2,p-norm instead of the nuclear norm and the ℓ2,1-norm, respectively. Experimental results on two well-known databases, PIE and ORL, demonstrate that the proposed method achieves better classification scores than state-of-the-art approaches.
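For reference, a Schatten p-norm is simply the ℓ_p norm of a matrix's singular values, which is why a small p approximates the rank more tightly than the nuclear norm (the p = 1 case). A minimal sketch:

```python
import numpy as np

def schatten_norm(M, p):
    """Schatten p-norm: the l_p norm of the singular values of M.
    p = 1 gives the nuclear norm; p = 2 gives the Frobenius norm;
    as p -> 0, sum(s**p) counts the nonzero singular values, i.e.
    approaches rank(M), so small p is a tighter rank surrogate."""
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

M = np.diag([3.0, 4.0])             # singular values are 3 and 4
nuclear = schatten_norm(M, 1.0)     # 3 + 4
frobenius = schatten_norm(M, 2.0)   # sqrt(9 + 16)
```

The trade-off is that p < 1 makes the objective non-convex, which is why such formulations need iterative solvers rather than the closed-form machinery available for the nuclear norm.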
- Research Article
4
- 10.1007/s11063-018-9951-0
- Nov 10, 2018
- Neural Processing Letters
Different views of one object usually capture different aspects of the object, and a single view is unlikely to describe the object comprehensively. In multi-view learning, it is therefore helpful to exploit multi-view information jointly. In this paper, we propose a novel supervised latent subspace learning method called multi-view intact discriminant space learning (MIDSL), which efficiently integrates the complementary information of different views. MIDSL learns a latent intact discriminant space by employing the Fisher discrimination criterion to fully exploit the class label information of labeled training samples, which guides the extraction of useful discriminant information. In the learned latent intact discriminant space, MIDSL simultaneously minimizes the within-class scatter and maximizes the between-class scatter of the feature representations of different objects. To let unlabeled samples help mine additional information for learning the latent intact discriminant space, we extend MIDSL to the semi-supervised scenario and propose semi-supervised multi-view intact discriminant space learning (SMIDSL). We further extend both methods with the kernel technique and propose kernelized multi-view intact discriminant space learning (KMIDSL) and kernelized semi-supervised multi-view intact discriminant space learning (KSMIDSL). Experimental results on the Caltech 101, LFW, MNIST and RGB-D datasets demonstrate the effectiveness of our proposed methods.
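The within-class and between-class scatter matrices that the Fisher discrimination criterion trades off can be computed directly; a minimal sketch on invented 2-D data:

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class scatter Sw and between-class scatter Sb as used by
    the Fisher criterion (minimize Sw, maximize Sb)."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)             # spread within class c
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)           # class mean vs. global mean
    return Sw, Sb

# Two classes separated along dimension 0, noisy along dimension 1.
X = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 0.0], [4.0, 1.0]])
y = np.array([0, 0, 1, 1])
Sw, Sb = scatter_matrices(X, y)
```

On this toy data all the between-class scatter lives in dimension 0 and all the within-class scatter in dimension 1, so a Fisher-style criterion would pick dimension 0 as the discriminant direction.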
- Conference Article
101
- 10.1145/1390156.1390284
- Jan 1, 2008
Bayesian network classifiers have been widely used for classification problems. Given a fixed Bayesian network structure, parameter learning can take two different approaches: generative and discriminative. While generative parameter learning is more efficient, discriminative parameter learning is more effective. In this paper, we propose a simple, efficient, and effective discriminative parameter learning method, called Discriminative Frequency Estimate (DFE), which learns parameters by discriminatively computing frequencies from data. Empirical studies show that the DFE algorithm integrates the advantages of both generative and discriminative learning: it matches the accuracy of the state-of-the-art discriminative parameter learning method ELR while being significantly more efficient.
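The DFE idea, incrementing frequency counts by the prediction loss 1 − P(true class | x) instead of by 1, can be sketched for a naive Bayes structure; the Laplace smoothing, iteration count and other details below are simplifications for illustration, not the paper's exact procedure:

```python
import numpy as np

def dfe_naive_bayes(X, y, n_classes, n_vals, iters=5):
    """Sketch of Discriminative Frequency Estimate on a naive Bayes
    'network': each training instance adds 1 - P(true class | x) to
    the counts, so poorly predicted instances contribute more.

    X: (n, d) integer features in [0, n_vals); y: labels in [0, n_classes)."""
    n, d = X.shape
    class_counts = np.ones(n_classes)             # Laplace-smoothed counts
    feat_counts = np.ones((n_classes, d, n_vals))

    def predict_proba(x):
        logp = np.log(class_counts / class_counts.sum())
        for j in range(d):
            col = feat_counts[:, j, x[j]]
            logp = logp + np.log(col / feat_counts[:, j].sum(axis=1))
        p = np.exp(logp - logp.max())
        return p / p.sum()

    for _ in range(iters):
        for i in range(n):
            loss = 1.0 - predict_proba(X[i])[y[i]]   # discriminative step size
            class_counts[y[i]] += loss
            feat_counts[y[i], np.arange(d), X[i]] += loss
    return predict_proba

# Tiny example: the class is simply the first feature.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 1, 1])
predict = dfe_naive_bayes(X, y, n_classes=2, n_vals=2)
```

Counting is a single generative-style pass per iteration, which is why DFE keeps close to frequency estimation's efficiency while the loss-weighted updates push the parameters in a discriminative direction.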
- Research Article
7
- 10.1007/s12065-019-00211-y
- Feb 23, 2019
- Evolutionary Intelligence
Transfer learning has recently gained attention by utilizing knowledge acquired in one domain to improve learning performance in another. Existing homogeneous transfer learning methods have progressed to the point where the feature spaces of the training and testing domains are common. Heterogeneous transfer learning, however, is still in its nascent stage, where the features of the training and testing domains differ. Taking this into account, Bregman divergence regularization is used to minimize the difference between the probability distributions of the training and testing domains and to bring them into a shared subspace. To discriminate data within individual domains, a projection matrix is obtained with the Fisher Linear Discriminant Analysis subspace learning algorithm. Experiments are performed on two widely used biometrics: face and fingerprint. Two types of cross-domain settings are used: (1) Face + Finger2Finger, where training samples come from face (labeled) and fingerprint (unlabeled) data sets and testing is performed on a fingerprint dataset; and (2) Finger + Face2Face, where training samples come from fingerprint (labeled) and face (unlabeled) data sets and testing is performed on a face dataset. This paper thus proposes a cross-domain association between face and fingerprint that finds utility in forensic applications.
- Research Article
19
- 10.1007/s10489-019-01610-5
- Feb 26, 2020
- Applied Intelligence
In traditional machine learning, classification models are learned on training data (the source domain) and reused to label test data (the target domain), assuming the training and test samples come from the same distribution. In modern applications, however, distribution shift between the source and target domains degrades model performance significantly. Domain adaptation methods compensate for this shift by aligning the distributions of the source and target domains under various adaptation strategies. This paper addresses robust image classification for unsupervised domain adaptation. Specifically, three methods are proposed: Discriminative Subspace Learning (DSL), Joint Geometrical and Statistical Distribution Adaptation (GSDA), and Joint Subspace and Distribution Adaptation (DSL-GSDA). DSL is a subspace-centric method that aligns the specific and shared features across domains: it finds two projections that map the source and target data into independent subspaces by aligning the discriminant and global structures of the domains. GSDA tends to find an adaptive classifier through statistical and geometrical distribution alignment while minimizing the prediction error. DSL-GSDA, a combination of DSL and GSDA, consists of two levels, subspace adaptation and distribution adaptation: it uses DSL to build two aligned subspaces for the source and target domains, and the distributions of the source and target data in the new subspaces are then adapted via GSDA. The proposed methods are evaluated on benchmark visual datasets for object, digit and face recognition tasks; these datasets consist of image domains captured under various real-world conditions where domain shift is unavoidable. The experimental results show that DSL, GSDA and DSL-GSDA outperform other state-of-the-art domain adaptation methods, with improvements of 6.19%, 1.48% and 1.99%, respectively.
Our source code is available at https://github.com/jtahmores/DSLGSDA.
- Research Article
1
- 10.1007/s00500-022-07333-z
- Jul 14, 2022
- Soft Computing
Human age estimation from facial images has become an active research topic in computer vision because of its many real-world applications. The temporal nature of facial aging produces sequential patterns that lie on a low-dimensional aging manifold. In this paper, we propose a discriminative manifold learning method for age estimation based on the hidden factor analysis (HFA) model, which decomposes facial features into an independent age factor and an identity factor. Various age-invariant face recognition systems in the literature use the identity factor for recognition, while the age factor remains unutilized; yet the age component of the HFA model depends on the subject's age and therefore carries significant age-related information. In this paper, we demonstrate that such aging patterns can be effectively extracted by an HFA-based discriminant subspace learning algorithm. We then apply multiple regression methods to the low-dimensional aging features learned from the HFA model. The effect of reduced dimensionality on accuracy is evaluated through extensive experiments and compared with state-of-the-art methods. The effectiveness and robustness of the proposed framework, in terms of MAE and CS, are demonstrated on the large-scale MORPH II aging database, where its accuracy is found to be superior to the current state-of-the-art methods.
- Research Article
8
- 10.1016/j.neunet.2018.11.003
- Jan 11, 2019
- Neural Networks
Unsupervised robust discriminative manifold embedding with self-expressiveness