Articles published on Subspace Learning Method
- Research Article
3
- 10.1109/tnnls.2024.3420738
- May 1, 2025
- IEEE transactions on neural networks and learning systems
- Songtao Li + 4 more
Graph regularized nonnegative matrix factorization (GNMF) has been widely used in data representation due to its excellent dimensionality reduction capability. When clustering polluted data, however, GNMF inevitably learns inaccurate representations, making the model unusually sensitive to outliers. For example, when a face in a dataset is obscured by items such as a mask or glasses, there is a high probability that the graph regularization term incorrectly describes the association relationships for that sample, misleading the matrix factorization process. In this article, a novel self-initiated unsupervised subspace learning method named robust nonnegative matrix factorization with self-initiated multigraph contrastive fusion (RNMF-SMGF) is proposed. RNMF-SMGF can create samples from different angles and learn a different graph structure for each angle in a self-initiated manner without changing the original data. In the process of subspace learning guided by graph regularization, these graph structures are fused into a more accurate one, and entropy regularization and $L_{2,1/2}$-norm constraints facilitate robust learning and the formation of distinct clusters in the low-dimensional space. To demonstrate the effectiveness of the proposed model in robust clustering, we conducted extensive experiments on several benchmark datasets. The source code is available at: https://github.com/LstinWh/RNMF-SMGF/.
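RNMF-SMGF extends the standard GNMF objective described above. As background, here is a minimal numpy sketch of the classic graph-regularized NMF multiplicative updates (Cai et al.'s GNMF formulation, which this line of work builds on; the function name and defaults are illustrative, not from the paper):

```python
import numpy as np

def gnmf(X, W, k, lam=0.1, iters=200, seed=0):
    """GNMF sketch: min ||X - U V^T||_F^2 + lam * tr(V^T L V), with
    X (m, n) nonnegative data and W (n, n) a symmetric nonnegative affinity
    graph over samples; L = D - W is the graph Laplacian."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k)) + 1e-3
    V = rng.random((n, k)) + 1e-3
    D = np.diag(W.sum(axis=1))  # degree matrix
    for _ in range(iters):
        # multiplicative updates keep U, V nonnegative throughout
        U *= (X @ V) / (U @ (V.T @ V) + 1e-12)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + 1e-12)
    return U, V
```

The graph term pulls the rows of V for strongly connected samples together, which is exactly the mechanism the abstract says goes wrong when the affinity graph mis-describes an occluded sample.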
- Research Article
4
- 10.1109/tpami.2024.3446537
- Dec 1, 2024
- IEEE transactions on pattern analysis and machine intelligence
- Wei Chang + 4 more
Multi-view learning has attracted increasing attention in recent years. However, traditional approaches focus only on the differences among views while ignoring their consistency, so views afflicted by abnormal or noisy data can become ineffective during view learning. Besides, current datasets have gradually become high-dimensional and large-scale. Therefore, this paper proposes a novel multi-view compressed subspace learning method via a low-rank tensor constraint, which incorporates the clustering process and multi-view learning into a unified framework. First, for each view, we take a subset of the samples to build a small-size dictionary, which greatly reduces both redundant information and computational cost. Then, to find the consistency and differences among views, we impose a low-rank tensor constraint on these representations and further design an auto-weighted mechanism to learn the optimal representation. Last, since the learned representation is non-square, a bipartite graph is introduced, and under the structured constraint the clustering results can be obtained directly from this graph without any post-processing. Extensive experiments on synthetic and real-world benchmark datasets demonstrate the efficacy and efficiency of our method, especially for views with noise or outliers.
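The small-size dictionary idea in the first step can be sketched with ridge-regularized coding over sampled anchor columns; this is a generic anchor-based sketch, not the paper's exact formulation (the function name, sampling scheme, and ridge solver are assumptions):

```python
import numpy as np

def anchor_representation(X, p=10, alpha=1.0, seed=0):
    """Anchor-based coding sketch: represent n samples over a dictionary of p
    sampled columns, shrinking the n x n affinity problem to an n x p one.
    X is (d, n) with samples as columns."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    A = X[:, rng.choice(n, size=p, replace=False)]  # small-size dictionary
    # ridge-regularized coding: Z = (A^T A + alpha I)^{-1} A^T X, shape (p, n)
    Z = np.linalg.solve(A.T @ A + alpha * np.eye(p), A.T @ X)
    return A, Z
```

The resulting p x n code Z is exactly the kind of non-square representation the abstract mentions, which is why a bipartite graph (anchors vs. samples) is the natural clustering structure.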
- Research Article
1
- 10.3390/rs16224287
- Nov 17, 2024
- Remote Sensing
- Cong-Yin Cao + 5 more
Although linear discriminant analysis (LDA)-based subspace learning has been widely applied to hyperspectral image (HSI) classification, the existing LDA-based subspace learning methods exhibit several limitations: (1) they are often sensitive to noise and demonstrate weak robustness; (2) they ignore the local information inherent in data; and (3) the number of extracted features is restricted by the number of classes. To address these drawbacks, this paper proposes a novel joint sparse local linear discriminant analysis (JSLLDA) method that integrates embedding regression and locality-preserving regularization into the LDA model for feature dimensionality reduction of HSIs. In JSLLDA, a row-sparse projection matrix is learned to uncover the joint sparse structure information of the data by imposing an L2,1-norm constraint. The L2,1-norm is also employed to measure the embedding regression reconstruction error, thereby mitigating the effects of noise and occlusions. A locality preservation term is incorporated to fully leverage the local geometric structure of the data, enhancing the discriminability of the learned projection. Furthermore, an orthogonal matrix is introduced to alleviate the limitation on the number of acquired features. Finally, extensive experiments conducted on three HSI datasets demonstrate that JSLLDA surpasses some related state-of-the-art dimensionality reduction methods.
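For context on the LDA model that JSLLDA extends, here is a minimal classical-LDA projection in numpy (scatter matrices plus a regularized generalized eigenproblem). This is the textbook baseline, not JSLLDA itself; it also exhibits limitation (3) from the abstract, since at most C-1 directions carry discriminative information:

```python
import numpy as np

def lda_projection(X, y, k):
    """Classical LDA: maximize between-class scatter Sb against within-class
    scatter Sw; X is (n, d) with rows as samples, y holds class labels."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    # small ridge on Sw avoids singularity; top-k eigenvectors of Sw^{-1} Sb
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:k]]
```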
- Research Article
4
- 10.1109/tnnls.2023.3281739
- Oct 1, 2024
- IEEE transactions on neural networks and learning systems
- Wanqi Yang + 5 more
In real applications, several unpredictable or uncertain factors could result in unpaired multiview data, i.e., the observed samples between views cannot be matched. Since joint clustering among views is more effective than individual clustering in each view, we investigate unpaired multiview clustering (UMC), a valuable but insufficiently studied problem. Due to the lack of matched samples between views, the connection between views cannot be built directly. Therefore, we aim to learn the latent subspace shared by views. However, existing multiview subspace learning methods usually rely on matched samples between views. To address this issue, we propose an iterative multiview subspace learning strategy, iterative unpaired multiview clustering (IUMC), which aims to learn a complete and consistent subspace representation among views for UMC. Moreover, based on IUMC, we design two effective UMC methods: 1) iterative unpaired multiview clustering via covariance matrix alignment (IUMC-CA), which further aligns the covariance matrices of the subspace representations and then performs clustering on the subspace, and 2) iterative unpaired multiview clustering via one-stage clustering assignments (IUMC-CY), which performs one-stage multiview clustering (MVC) by replacing the subspace representations with clustering assignments. Extensive experiments show the excellent performance of our methods for UMC compared with state-of-the-art methods. Also, the clustering performance on the observed samples in each view can be considerably improved by the observed samples from the other views. In addition, our methods have good applicability in incomplete MVC.
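The covariance-matrix alignment in IUMC-CA can be illustrated by a generic whiten-then-recolor step (CORAL-style second-order alignment); this sketch is an assumption about the mechanism, not the paper's exact procedure:

```python
import numpy as np

def align_covariance(Zs, Zt, eps=1e-6):
    """Second-order alignment sketch: whiten the source subspace representation
    Zs (n_s, k), then recolor it with the covariance and mean of the target
    representation Zt (n_t, k), so both share second-order statistics."""
    Cs = np.cov(Zs, rowvar=False) + eps * np.eye(Zs.shape[1])
    Ct = np.cov(Zt, rowvar=False) + eps * np.eye(Zt.shape[1])

    def mat_power(C, p):
        # symmetric matrix power via eigendecomposition
        w, V = np.linalg.eigh(C)
        return (V * np.clip(w, eps, None) ** p) @ V.T

    return (Zs - Zs.mean(0)) @ mat_power(Cs, -0.5) @ mat_power(Ct, 0.5) + Zt.mean(0)
```

Because no sample pairing is needed, only per-view statistics, this kind of alignment is usable in the unpaired setting the abstract describes.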
- Research Article
1
- 10.3390/rs16163081
- Aug 21, 2024
- Remote Sensing
- Chen-Feng Long + 5 more
Low-rank representation (LRR) is widely utilized in image feature extraction, as it can reveal the underlying correlation structure of data. However, subspace learning methods based on LRR suffer from a lack of robustness and discriminability. To address these issues, this paper proposes a new robust feature extraction method named weighted Schatten p-norm minimization via low-rank discriminative embedding regression (WSNM-LRDER), which integrates the weighted Schatten p-norm and linear embedding regression into the LRR model. In WSNM-LRDER, the weighted Schatten p-norm is adopted to relax the low-rank function, which can discover the underlying structural information of the image and enhance the robustness of projection learning. In order to improve the discriminability of the learned projection, an embedding regression regularization is constructed to make full use of prior information. Experimental results on three hyperspectral image datasets show that the proposed WSNM-LRDER achieves better performance than some advanced feature extraction methods. In particular, the proposed method yielded increases of more than 1.2%, 1.1%, and 2% in overall accuracy (OA) for the Kennedy Space Center, Salinas, and Houston datasets, respectively, compared with the competing methods.
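A weighted Schatten p-norm relaxation acts on singular values rather than on rank directly. The following sketch shrinks each singular value with a few fixed-point steps of a generalized thresholding iteration; the exact solver and weighting scheme in WSNM-LRDER may differ, and this function is illustrative only:

```python
import numpy as np

def weighted_schatten_shrink(M, weights, p=0.5, tau=1.0, inner=5):
    """Sketch of a weighted Schatten p-norm proximal step: for each singular
    value sigma_i, approximately solve min_x 0.5*(x - sigma_i)^2 + tau*w_i*x^p
    by fixed-point iteration x <- max(sigma - tau*w*p*x^(p-1), 0), then
    rebuild the matrix. Small singular values are driven to zero, large ones
    are shrunk only mildly (unlike uniform nuclear-norm shrinkage)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    x = s.copy()
    for _ in range(inner):
        x = np.maximum(s - tau * weights * p * np.maximum(x, 1e-8) ** (p - 1), 0.0)
    return U @ np.diag(x) @ Vt
```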
- Research Article
- 10.1093/comjnl/bxae049
- Jun 10, 2024
- The Computer Journal
- Zhuojie Huang + 3 more
Many subspace learning methods based on low-rank representation employ the nearest-neighborhood graph to preserve the local structure. However, in these methods the nearest-neighborhood graph is a binary matrix, which fails to precisely capture the similarity between distinct samples. Additionally, these methods need to manually select an appropriate number of neighbors, and they cannot adaptively update the similarity graph during projection learning. To tackle these issues, we introduce Discriminative Subspace Learning with Adaptive Graph Regularization (DSL_AGR), an unsupervised subspace learning method that integrates low-rank representation, adaptive graph learning, and nonnegative representation into a unified framework. DSL_AGR introduces a low-rank constraint to capture the global structure of the data and extract more discriminative information. Furthermore, a novel graph regularization term in DSL_AGR is guided by nonnegative representations to enhance its capability of capturing the local structure. Since closed-form solutions for the proposed method are not easily obtained, we devise an iterative optimization algorithm to solve it. We also analyze the computational complexity and convergence of DSL_AGR. Extensive experiments on real-world datasets demonstrate that the proposed method achieves competitive performance compared with other state-of-the-art methods.
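The contrast the abstract draws, between a binary nearest-neighborhood graph and a graded similarity graph, can be seen in a few lines of numpy (a generic illustration, not DSL_AGR's learned graph):

```python
import numpy as np

def knn_graphs(X, k=5, sigma=1.0):
    """Build two kNN graphs over the rows of X (n, d): a binary one, where
    every neighbor counts equally, and a heat-kernel weighted one, where
    closer neighbors get similarity nearer to 1."""
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    n = X.shape[0]
    B = np.zeros((n, n))
    Wg = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])
        idx = idx[idx != i][:k]
        B[i, idx] = 1.0                                     # binary: all equal
        Wg[i, idx] = np.exp(-D[i, idx] / (2 * sigma ** 2))  # graded similarity
    return B, Wg
```

Both graphs still require choosing k by hand, which is precisely the second limitation DSL_AGR's adaptive graph learning is designed to remove.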
- Research Article
6
- 10.1016/j.eswa.2024.123831
- Mar 26, 2024
- Expert Systems with Applications
- Wenyi Feng + 5 more
Discriminative sparse subspace learning with manifold regularization
- Research Article
2
- 10.1088/1361-6501/ad3294
- Mar 19, 2024
- Measurement Science and Technology
- Fuchao Yu + 3 more
With the development of industrial intelligence, data-driven fault diagnosis plays an important role in prognostics and health management. However, there is usually a large amount of unlabeled data from different working conditions, making cross-domain fault diagnosis unstable and inflexible. To deal with this issue, we propose two novel transfer subspace learning methods based on low-rank sparse representation (LRSR), called LRSR-G and LRSR-R. Specifically, LRSR-G integrates an additional matrix with LRSR to characterize Gaussian noise for robustness, as well as to capture global and local structures. Furthermore, LRSR-R adaptively learns the label matrix from samples instead of using the binary labeling matrix of LRSR-G, thus offering improved flexibility. In addition, we develop two efficient algorithms based on the alternating direction method of multipliers to solve the proposed LRSR-G and LRSR-R. Extensive experiments are conducted on the Case Western Reserve University dataset and the Jiangnan University (JNU) dataset. The results show that the proposed LRSR-G and LRSR-R perform better than existing methods, while LRSR-R shows more potential in cross-domain fault diagnosis tasks.
- Research Article
30
- 10.1109/tnnls.2022.3194896
- Mar 1, 2024
- IEEE Transactions on Neural Networks and Learning Systems
- Jintang Bian + 4 more
Principal component analysis (PCA) is one of the most successful unsupervised subspace learning methods and has been used in many practical applications. To deal with outliers in real-world data, robust principal component analysis models based on various measures have been proposed. However, conventional PCA models can only transform features into an unknown subspace for dimensionality reduction and cannot perform the feature selection task. In this article, we propose a novel robust PCA (RPCA) model that mitigates the impact of outliers and conducts feature selection simultaneously. First, we adopt the σ-norm as the reconstruction error (RE), which plays an important role in robust reconstruction. Second, to conduct the feature selection task, we apply an l2,0-norm constraint to the subspace projection. Furthermore, an efficient iterative optimization algorithm is proposed to solve the objective function with its nonconvex and nonsmooth constraint. Extensive experiments conducted on several real-world datasets demonstrate the effectiveness and superiority of the proposed feature selection model.
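An l2,0-norm constraint on the projection limits the number of nonzero rows, so each zero row switches the corresponding input feature off entirely. A minimal hard-constraint sketch (the paper's optimization is more involved; this only shows the constraint's effect, and the function name is mine):

```python
import numpy as np

def l20_project(W, k):
    """Enforce ||W||_{2,0} <= k on a projection matrix W (d, m): keep the k
    rows with the largest l2 norms and zero the rest, so only k of the d
    original features contribute to the projected representation."""
    norms = np.linalg.norm(W, axis=1)
    keep = np.argsort(-norms)[:k]
    out = np.zeros_like(W)
    out[keep] = W[keep]
    return out
```

Projecting data as `X @ l20_project(W, k)` then depends on exactly k features, which is why the row-sparse constraint turns PCA-style subspace learning into joint feature selection.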
- Research Article
2
- 10.1109/tcyb.2022.3206064
- Mar 1, 2024
- IEEE transactions on cybernetics
- Jinfu Ren + 2 more
Subspace learning (SL) plays a key role in various learning tasks, especially those with a huge feature space. When processing multiple high-dimensional learning tasks simultaneously, it is of great importance to make use of the subspace extracted from some tasks to help learn others, so that the learning performance of all tasks can be enhanced together. To achieve this goal, it is crucial to answer the following question: How can the commonality among different learning tasks and, of equal importance, the individuality of each single learning task, be characterized and extracted from the given datasets, so as to benefit the subsequent learning, for example, classification? Existing multitask SL methods have usually focused on the commonality among the given tasks while neglecting the individuality of the learning tasks. In order to offer a more general and comprehensive framework for multitask SL, in this article, we propose a novel method dubbed commonality and individuality-based SL (CISL). First, we formally define the notions and objective functions of both commonality and individuality with respect to multiple SL tasks. Then, we design an iterative algorithm to solve the formulated objective functions, with the convergence of the algorithm being guaranteed. To show the generality of the proposed method, we theoretically analyze its connections to existing single-task and multitask SL methods. Finally, we demonstrate the necessity and effectiveness of incorporating both commonality and individuality by interpreting the learned subspaces and comparing the performance of CISL (in terms of the subsequent classification accuracy) with that of classical and state-of-the-art SL approaches on both synthetic and real-world multitask datasets. The empirical evaluation validates the effectiveness of the proposed method in characterizing the commonality and individuality for multitask SL.
- Research Article
- 10.1016/j.measurement.2023.114039
- Dec 15, 2023
- Measurement
- Shuzhi Su + 4 more
Elastic subspace diagnosis via graph-balanced discriminant projection
- Research Article
1
- 10.1016/j.asoc.2023.111096
- Nov 27, 2023
- Applied Soft Computing
- Fangzheng Huang + 3 more
Cyclic style generative adversarial network for near infrared and visible light face recognition
- Research Article
9
- 10.1049/cvi2.12228
- Aug 8, 2023
- IET Computer Vision
- Tao Zhang + 5 more
Although low-rank representation (LRR)-based subspace learning has been widely applied for feature extraction in computer vision, how to enhance the discriminability of the low-dimensional features extracted by LRR-based subspace learning methods still needs further investigation. Therefore, this paper proposes a novel low-rank preserving embedding regression (LRPER) method by integrating LRR, linear regression, and projection learning into a unified framework. In LRPER, LRR can reveal the underlying structure information to strengthen the robustness of projection learning. The robust L2,1-norm metric is employed to measure the low-rank reconstruction error and the regression loss, in order to model noise and occlusions. An embedding regression is proposed to make full use of prior information and improve the discriminability of the learned projection. In addition, an alternative iteration algorithm is designed to optimise the proposed model, and the computational complexity of the optimisation algorithm is briefly analysed. The convergence of the optimisation algorithm is theoretically and numerically studied. Finally, extensive experiments on four types of image datasets are carried out to demonstrate the effectiveness of LRPER, and the results show that LRPER performs better than some state-of-the-art feature extraction methods.
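The L2,1-norm used here as the robust metric sums the l2 norms of the rows of a matrix, so a grossly corrupted sample (row) contributes linearly rather than quadratically, as it would under the squared Frobenius norm. A one-line numpy version:

```python
import numpy as np

def l21_norm(E):
    """L2,1-norm of a matrix E (n, d): sum over rows of each row's l2 norm.
    Whole corrupted samples are penalized jointly, which is what makes it a
    common robust metric for reconstruction and regression residuals."""
    return np.linalg.norm(E, axis=1).sum()
```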
- Research Article
17
- 10.1016/j.patcog.2023.109869
- Aug 5, 2023
- Pattern Recognition
- Lei Zhou + 6 more
Zero-shot learning (ZSL) aims to recognize unseen categories without corresponding training samples, which is a practical yet challenging task in the computer vision and pattern recognition community. Current state-of-the-art locality-based ZSL methods aim to learn the explicit locality of discriminative attributes, which may suffer from insufficient class-level attribute supervision. In this paper, we introduce an Attribute Subspace learning method for ZSL (AS-ZSL) to learn implicit attribute composition, which is more general than attribute localization with only class-level attribute supervision. AS-ZSL exploits subspace representations that can effectively capture the intrinsic composition of high-dimensional image features and the diversity within attribute appearance. Furthermore, we develop a subspace-distance-based triplet loss to improve the distinguishability of the attribute subspace representation. The attribute subspace learning module is needed only in the training phase, where it jointly learns discriminative global features; this leads to a compact inference phase. Moreover, the proposed AS-ZSL can be naturally extended to the transductive ZSL setting using a novel self-supervised training strategy. Extensive experimental results on several widely used ZSL datasets, i.e., CUB, AwA2, and SUN, demonstrate the advantage of AS-ZSL over the state-of-the-art under different ZSL settings.
- Research Article
7
- 10.1109/tcsvt.2022.3224003
- May 1, 2023
- IEEE Transactions on Circuits and Systems for Video Technology
- Shuai Shao + 5 more
Decoupled few-shot learning (FSL) is an effective methodology for dealing with the problem of data scarcity. Its standard paradigm includes two phases: (1) Pre-training: generate a CNN-based feature extraction model (FEM) from the base data. (2) Meta-testing: employ the frozen FEM to obtain features for the novel data, then classify them. One crucial factor, the category gap, hinders the development of FSL: it is challenging for the pre-trained FEM to adapt to the novel classes flawlessly. Inspired by the common-sense observation that FEMs trained with different strategies focus on different priorities, we attempt to address this problem from the multi-view feature collaboration (MVFC) perspective. Specifically, we first denoise the multi-view features with a subspace learning method, then design three attention blocks (a loss-attention block, a self-attention block, and a graph-attention block) to balance the representations of the different views. The proposed method is evaluated on four benchmark datasets and achieves significant improvements of 0.9%-5.6% over state-of-the-art methods.
- Research Article
10
- 10.1145/3587034
- Apr 13, 2023
- ACM Transactions on Intelligent Systems and Technology
- Qilun Luo + 3 more
Multi-view clustering aims to capture the information inherent in multiple views by identifying a data clustering that reflects distinct features of the datasets. Since there is a consensus in the literature that different views of a dataset share a common latent structure, most existing multi-view subspace learning methods rely on the nuclear norm to seek the low-rank representation of the underlying subspace. However, the nuclear norm often fails to distinguish the variance of features for each cluster due to its convex nature, and data tend to fall in multiple non-linear subspaces for multi-dimensional datasets. To address these problems, we propose a novel multi-view clustering method (HL-L21-TLD-MSC) that unifies Hyper-Laplacian (HL) and exclusive ℓ2,1 (L21) regularization with a Tensor Log-Determinant Rank Minimization (TLD) setting. Specifically, the hyper-Laplacian regularization maintains the local geometrical structure so that the estimation can adapt to nonlinearities, and the mixed ℓ2,1 and ℓ1,2 regularization provides joint sparsity within clusters as well as exclusive sparsity between clusters. Furthermore, a log-determinant function is used as a tighter tensor rank approximation to discriminate the dimension of features. An efficient alternating algorithm is then derived to optimize the proposed model, and the construction of a sequence converging to a Karush-Kuhn-Tucker (KKT) critical point is mathematically validated in detail. Extensive experiments are conducted on ten well-known datasets to demonstrate that the proposed approach outperforms the existing state-of-the-art approaches in various scenarios; on six of the ten datasets, the framework developed in this article achieves perfect results, demonstrating its high effectiveness.
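The log-determinant rank surrogate can be written directly on the singular values; this scalar sketch shows why it is a tighter rank approximation than the nuclear norm, since it penalizes large singular values far less (the paper applies the tensor version, which is omitted here; the function name and the form log(1 + σ²/ε) are one common convention, assumed for illustration):

```python
import numpy as np

def logdet_rank_surrogate(M, eps=1.0):
    """Nonconvex rank surrogate: sum_i log(1 + sigma_i^2 / eps). Each large
    singular value contributes roughly a constant (like rank does), whereas
    the nuclear norm sum_i sigma_i keeps growing linearly."""
    s = np.linalg.svd(M, compute_uv=False)
    return np.log(1.0 + s ** 2 / eps).sum()
```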
- Research Article
5
- 10.1109/tnnls.2021.3105813
- Apr 1, 2023
- IEEE Transactions on Neural Networks and Learning Systems
- Weizhong Yu + 4 more
Graph-based subspace learning has been widely used in various applications with the rapid growth of data dimensionality, where the graph is constructed from the affinity matrix of the input data. However, it is difficult for these subspace learning methods to preserve the intrinsic local structure of data in the presence of high-dimensional noise. To address this problem, we propose a novel unsupervised dimensionality reduction approach named unsupervised subspace learning with flexible neighboring (USFN). We learn a similarity graph by an adaptive probabilistic neighborhood learning process to preserve the manifold structure of high-dimensional data. In addition, we utilize flexible neighboring to learn the projection and a latent representation of the manifold structure, removing the impact of noise. The adaptive similarity graph and latent representation are jointly learned by integrating adaptive probabilistic neighborhood learning and a manifold residue term into a unified objective function. Experimental results on synthetic and real-world datasets demonstrate the effectiveness of the proposed USFN method.
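Adaptive probabilistic neighborhood learning of the kind USFN describes has a well-known closed form (popularized by Nie et al.'s CAN model, assumed here purely as an illustration): each sample receives a probability distribution over its k nearest neighbors, with farther neighbors weighted down and the (k+1)-th neighbor receiving exactly zero weight:

```python
import numpy as np

def adaptive_neighbors(X, k=5):
    """Closed-form adaptive neighbor probabilities for rows of X (n, d):
    s_ij = (d_{i,k+1} - d_ij) / (k * d_{i,k+1} - sum_{j<=k} d_ij),
    where d are squared Euclidean distances sorted ascending per sample.
    Each row of S is a probability simplex with at most k nonzeros."""
    n = X.shape[0]
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])
        idx = idx[idx != i][:k + 1]          # k+1 nearest neighbors, excluding self
        d = D[i, idx]
        denom = k * d[k] - d[:k].sum() + 1e-12
        S[i, idx[:k]] = np.maximum((d[k] - d[:k]) / denom, 0.0)
    return S
```

Unlike a fixed heat-kernel graph, this assigns weights by local distance gaps, which is the adaptive behavior the abstract contrasts with static affinity-matrix construction.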
- Research Article
9
- 10.1016/j.sigpro.2023.108976
- Feb 18, 2023
- Signal Processing
- Wuli Wang + 5 more
Subspace prototype learning for few-Shot remote sensing scene classification
- Research Article
4
- 10.1016/j.ins.2023.02.036
- Feb 13, 2023
- Information Sciences
- Wei Chang + 3 more
Calibrated multi-task subspace learning via binary group structure constraint
- Research Article
29
- 10.1109/tii.2022.3195171
- Feb 1, 2023
- IEEE Transactions on Industrial Informatics
- Pengwen Xiong + 4 more
In order to help robots understand and perceive an object's properties during noncontact robot-object interaction, this article proposes a deeply supervised subspace learning method. In contrast to previous work, it takes advantage of the low noise and fast response of noncontact sensors and extracts novel contactless feature information for cross-modal retrieval, so as to estimate and infer the material properties of known as well as unknown objects. Specifically, a depth-supervised subspace cross-modal material retrieval model is trained to learn a common low-dimensional feature representation that captures the clustering structure among the different modal features of each object class. Meanwhile, unknown objects are accurately perceived by an energy-based model, which forces an unlabeled novel object's features to be mapped outside the common low-dimensional features. The experimental results show that our approach is effective in comparison with other advanced methods.