Articles published on Subspace Learning
- Research Article
- 10.62762/tis.2025.224024
- Nov 5, 2025
- ICCK Transactions on Intelligent Systematics
- Muhammad Osama + 2 more
Hyperspectral imaging (HSI) has become a powerful tool for remote sensing and material analysis because it can capture detailed spectral information across hundreds of adjacent bands. Nevertheless, the high dimensionality and redundancy of HSI data make precise and efficient classification challenging. This paper presents an extensive comparative study of both traditional and state-of-the-art machine learning algorithms for HSI classification. Classical classifiers such as Support Vector Machines (SVM) and K-Nearest Neighbors (KNN) are compared with more recent methods such as collaborative and sparse representation-based approaches, Convolutional Recurrent Neural Networks (CRNN), Classification and Regression Trees (CRT), and Local Fisher Discriminant Analysis with Class Mean Modeling (LFDA-CMM). Particular emphasis is given to the Nearest Regularized Subspace (NRS) classifier family, which utilizes different distance measures (Spectral Angle Mapper, Manhattan, Euclidean, Cosine, and Chi-square) to achieve improved classification accuracy. Experimental results on two benchmark datasets, Indian Pines and University of Pavia, show that the proposed NRS-MD method consistently achieves better performance in accuracy, Kappa coefficient, and computational complexity. These results emphasize the capability of regularized subspace models to meet the challenges of hyperspectral image classification and provide valuable guidance for choosing appropriate methods in practical applications.
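As a point of reference, the sketch below implements the five distance measures named in this abstract as plain NumPy functions applied to two synthetic spectra. It illustrates the distances only, not the NRS-MD classifier itself, and all variable names and sizes are invented for the demo.

```python
# Hedged sketch: the spectral distance measures named above, applied to two toy spectra.
import numpy as np

def spectral_angle(x, y):
    # Spectral Angle Mapper: angle between spectra, insensitive to scaling
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def manhattan(x, y):
    return np.sum(np.abs(x - y))

def euclidean(x, y):
    return np.linalg.norm(x - y)

def cosine_distance(x, y):
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def chi_square(x, y, eps=1e-12):
    # assumes non-negative spectra (e.g., reflectance values)
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random(200)   # a 200-band spectrum (hypothetical)
    b = rng.random(200)
    for name, fn in [("SAM", spectral_angle), ("Manhattan", manhattan),
                     ("Euclidean", euclidean), ("Cosine", cosine_distance),
                     ("Chi-square", chi_square)]:
        print(f"{name}: {fn(a, b):.4f}")
```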
- Research Article
- 10.1109/tcyb.2025.3597576
- Nov 1, 2025
- IEEE transactions on cybernetics
- Junwei Sun + 4 more
Current biological behavior models base decision-making only on external environmental information, ignoring internal emotional state. A memristor-based cerebellar model articulation controller (CMAC) neural network circuit for artificial fish behavioral decision-making is designed that takes fuzzy emotion into account. The designed circuit is mainly composed of voltage selection modules, fuzzy processing modules, synaptic neuron modules, eigen quantity modules, and feedback modules. The CMAC neural network is used as the learning criterion, and the learning subspace voltage with emotional generalization properties is output to the synaptic neuron module. By utilizing the nonvolatility and thresholding properties of the memristor, the weights in the neural network are changed to enable the artificial fish to perform primary and secondary learning under specific emotional voltages. The feasibility of the circuit is verified with PSpice simulation software. Artificial life and biologically intelligent behavior are integrated by the memristor-based CMAC neural network circuit, providing a reliable theoretical basis for the emotional behavior of bionic robots.
- Research Article
- 10.3390/rs17193348
- Oct 1, 2025
- Remote Sensing
- Yinhu Wu + 2 more
Hyperspectral imaging (HSI) systems often suffer from complex noise degradation during the imaging process, significantly impacting downstream applications. Deep learning-based methods, though effective, rely on impractical paired training data, while traditional model-based methods require manually tuned hyperparameters and lack generalization. To address these issues, we propose SS3L (Self-Supervised Spectral-Spatial Subspace Learning), a novel HSI denoising framework that requires neither paired data nor manual tuning. Specifically, we introduce a self-supervised spectral–spatial paradigm that learns noisy features from noisy data, rather than paired training data, based on spatial geometric symmetry and spectral local consistency constraints. To avoid manual hyperparameter tuning, we propose an adaptive rank subspace representation and a loss function designed based on the collaborative integration of spectral and spatial losses via noise-aware spectral-spatial weighting, guided by the estimated noise intensity. These components jointly enable a dynamic trade-off between detail preservation and noise reduction under varying noise levels. The proposed SS3L embeds noise-adaptive subspace representations into the dynamic spectral–spatial hybrid loss-constrained network, enabling cross-sensor denoising through prior-informed self-supervision. Experimental results demonstrate that SS3L effectively removes noise while preserving both structural fidelity and spectral accuracy under diverse noise conditions.
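For orientation, here is a minimal, generic low-rank spectral subspace projection of an HSI cube via SVD. It illustrates the subspace-representation idea this abstract builds on, under the assumption of a fixed, hand-picked rank; it does not reproduce SS3L's self-supervised losses or adaptive rank selection.

```python
# Hedged sketch: generic low-rank spectral subspace projection for an HSI cube via SVD.
import numpy as np

def subspace_project(hsi, rank):
    """Project an (H, W, B) hyperspectral cube onto a rank-`rank` spectral subspace."""
    h, w, b = hsi.shape
    x = hsi.reshape(-1, b)                      # pixels as rows, bands as columns
    mean = x.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    basis = vt[:rank]                           # leading spectral basis vectors
    denoised = (x - mean) @ basis.T @ basis + mean
    return denoised.reshape(h, w, b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = rng.random((32, 32, 100))
    noisy = clean + 0.05 * rng.standard_normal(clean.shape)
    out = subspace_project(noisy, rank=8)       # rank chosen arbitrarily for the demo
    print(out.shape)
```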
- Research Article
- 10.1016/j.media.2025.103628
- Oct 1, 2025
- Medical image analysis
- Sarah Müller + 3 more
Disentangling representations of retinal images with generative models
- Research Article
- 10.1016/j.dsp.2025.105248
- Sep 1, 2025
- Digital Signal Processing
- Ziping Ma + 3 more
Sparse dual-graph regularized quadratic dimensionality reduction algorithm based on subspace learning
- Research Article
- 10.1109/tbme.2025.3541643
- Aug 1, 2025
- IEEE transactions on bio-medical engineering
- Ziwen Ke + 9 more
To enable fast and stable neonatal brain MR imaging by integrating a learned neonate-specific subspace model and model-driven deep learning. Fast data acquisition is critical for neonatal brain MRI, and deep learning has emerged as an effective tool to accelerate existing fast MRI methods by leveraging prior image information. However, deep learning often requires large amounts of training data to ensure stable image reconstruction, which is not currently available for neonatal MRI applications. In this work, we addressed this problem by utilizing a subspace model-assisted deep learning approach. Specifically, we used a subspace model to capture the spatial features of neonatal brain images. The learned neonate-specific subspace was then integrated with a deep network to reconstruct high-quality neonatal brain images from very sparse k-space data. The effectiveness and robustness of the proposed method were validated using both the dHCP dataset and testing data from four independent medical centers, yielding very encouraging results. The stability of the proposed method was confirmed under different perturbations, all showing remarkably stable reconstruction performance. The flexibility of the learned subspace was also shown when combined with other deep neural networks, yielding improved image reconstruction performance. Fast and stable neonatal brain MR imaging can be achieved using subspace-assisted deep learning with sparse sampling. With further development, the proposed method may improve the practical utility of MRI in neonatal imaging applications.
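A minimal sketch of the core subspace-assisted reconstruction idea follows: fit coefficients of a low-dimensional basis to undersampled linear measurements by least squares. The basis, measurement operator, and sizes are all synthetic assumptions; the paper's learned neonate-specific subspace and deep network are not reproduced.

```python
# Hedged sketch: least-squares fitting of subspace coefficients to undersampled measurements.
import numpy as np

rng = np.random.default_rng(3)
n, r, m = 256, 10, 64                                # signal size, subspace rank, samples
Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal subspace basis (synthetic)
x_true = Phi @ rng.standard_normal(r)                # signal assumed to lie in the subspace

A = rng.standard_normal((m, n)) / np.sqrt(m)         # undersampled measurement operator
y = A @ x_true                                       # measured data (noise-free for the demo)

# solve min_c ||A Phi c - y||_2 and reconstruct x = Phi c
c, *_ = np.linalg.lstsq(A @ Phi, y, rcond=None)
x_hat = Phi @ c
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # near-zero relative error
```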
- Research Article
- 10.1016/j.eswa.2025.128007
- Aug 1, 2025
- Expert Systems with Applications
- Hongwei Jiang + 9 more
Cross representation subspace learning for multi-view clustering
- Research Article
- 10.1016/j.inffus.2025.103075
- Aug 1, 2025
- Information Fusion
- Sana Bellili + 7 more
Tensor-driven face recognition: Integrating super-resolution and multilinear subspace learning for low-resolution images
- Research Article
- 10.1088/1741-2552/adeec8
- Jul 28, 2025
- Journal of Neural Engineering
- Yibing Li + 6 more
Objective. Speech imagery is a nascent paradigm that is receiving widespread attention in current brain-computer interface (BCI) research. By collecting the electroencephalogram (EEG) data generated when imagining the pronunciation of a sentence or word, machine learning methods are used to decode the intention that the subject wants to express. Among existing decoding methods, graphs are often used as an effective tool to model the data structure; however, in the field of BCI research, the correlations between EEG samples may not be fully characterized by simple pairwise relationships. Therefore, this paper attempts to employ a more effective data structure to model EEG data. Approach. In this paper, we introduce hypergraphs to describe the high-order correlations between samples by viewing feature vectors extracted from each sample as vertices and then connecting them through hyperedges. We also dynamically update the weights of hyperedges, the weights of vertices, and the structure of the hypergraph in two transformed subspaces, i.e., projected and feature-weighted subspaces. Accordingly, two dynamic hypergraph learning models, i.e., dynamic hypergraph semi-supervised learning within projected subspace (DHSLP) and dynamic hypergraph semi-supervised learning within selected feature subspace (DHSLF), are proposed for speech imagery decoding. Main results. To validate the proposed models, we performed a series of experiments on two EEG datasets. The obtained results demonstrated that both DHSLP and DHSLF offer statistically significant improvements over existing studies in decoding imagined speech intentions. Specifically, DHSLP achieved accuracies of 78.40% and 66.64% on the two datasets, while DHSLF achieved accuracies of 71.07% and 63.94%. Significance. Our study indicates the effectiveness of the learned hypergraphs in characterizing the underlying semantic information of imagined contents; in addition, interpretable results on quantitatively exploring the discriminative EEG channels in speech imagery decoding are obtained, which lay the foundation for further exploration of the physiological mechanisms during speech imagery.
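As a rough illustration of the hypergraph modeling idea, the sketch below builds a k-NN hypergraph over synthetic feature vectors and computes a standard normalized hypergraph Laplacian. It does not implement the dynamic weight and structure updates of DHSLP or DHSLF, and all parameters are placeholders.

```python
# Hedged sketch: k-NN hypergraph construction and its normalized Laplacian.
import numpy as np

def knn_hypergraph_laplacian(features, k=5):
    n = features.shape[0]
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    # one hyperedge per sample: the sample plus its k nearest neighbors
    H = np.zeros((n, n))
    for e in range(n):
        idx = np.argsort(d2[e])[: k + 1]
        H[idx, e] = 1.0
    w = np.ones(n)                               # uniform hyperedge weights
    dv = H @ w                                   # vertex degrees
    de = H.sum(axis=0)                           # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n) - theta                     # normalized hypergraph Laplacian

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.standard_normal((40, 16))            # 40 toy EEG feature vectors (hypothetical)
    L = knn_hypergraph_laplacian(X, k=5)
    print(L.shape)
```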
- Research Article
- 10.1371/journal.pone.0326870
- Jul 11, 2025
- PloS one
- Li Bo + 2 more
Accurate prediction of multi-dimensional water quality indicators is critical for sustainable water resource management, yet existing methods often fail to address the high-dimensional, nonlinear, and spatially correlated nature of data from heterogeneous IoT sensors. To overcome these limitations, we propose TGMHA (Tensor Decomposition and Gated Neural Network with Multi-Head Self-Attention), a novel hybrid model that integrates three key innovations: 1) Tensor-based Feature Extraction: We combine Standard Delay Embedding Transformation (SDET) with Tucker tensor decomposition to reconstruct raw time series into low-rank tensor representations, capturing latent spatio-temporal patterns while suppressing sensor noise. 2) Multi-Head Self-Attention for Inter-Indicator Dependencies: A multi-head self-attention mechanism explicitly models complex inter-dependencies among diverse water quality indicators (e.g., pH, dissolved oxygen, conductivity) via parallel feature subspace learning. 3) Efficient Long-Term Dependency Modeling: An encoder-decoder architecture with gated recurrent units (GRUs), optimized by adaptive rank selection, ensures efficient modeling of long-term dependencies without compromising computational performance. By unifying these components into an end-to-end trainable system, TGMHA surpasses conventional approaches in handling complex water quality dynamics, particularly in scenarios with missing data and nonlinear interactions. Rigorous evaluation against six state-of-the-art benchmarks confirms TGMHA's superior capability, offering a robust and interpretable paradigm for multi-sensor fusion and water quality forecasting in environmental informatics.
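The following minimal PyTorch snippet shows multi-head self-attention applied across indicator tokens, the mechanism the abstract describes for modeling inter-indicator dependencies. The dimensions and indicator count are assumptions for the demo; this is not the TGMHA architecture.

```python
# Hedged sketch: multi-head self-attention over water-quality indicator tokens.
import torch
import torch.nn as nn

n_indicators, embed_dim, n_heads = 6, 32, 4     # e.g., pH, DO, conductivity, ... (assumed)
attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

x = torch.randn(8, n_indicators, embed_dim)     # batch of 8, one token per indicator
out, weights = attn(x, x, x)                    # self-attention across indicators
print(out.shape, weights.shape)                 # (8, 6, 32), (8, 6, 6)
```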
- Research Article
- 10.1016/j.media.2025.103566
- Jul 1, 2025
- Medical image analysis
- Jian Guan + 2 more
MVNMF: Multiview nonnegative matrix factorization for radio-multigenomic analysis in breast cancer prognosis
- Research Article
- 10.3390/app15137251
- Jun 27, 2025
- Applied Sciences
- Liping Chen + 1 more
Multi-view data improve the effectiveness of clustering tasks, but they often suffer from complex noise and corruption. Missing views in multi-view samples lead to serious degradation of the clustering model's performance. Current multi-view clustering methods typically try to compensate for the missing information in the original domain, which is limited by the linear representation function. Moreover, the clustering structures across views are not sufficiently considered, which leads to suboptimal results. To solve these problems, a tensioned multi-view subspace clustering algorithm based on sequential kernels is proposed to integrate complementary information in multi-source heterogeneous data. By superimposing the kernel matrix based on the sequential characteristics onto a third-order tensor, a robust low-rank representation of the missing views is reconstructed through the matrix calculation of sequential kernel learning. In addition, the tensor structure helps subspace learning mine the high-order associations between different views. The Tensioned Multi-view Ordered Kernel Subspace Clustering (TMOKSC) algorithm is implemented within the ADMM framework. Compared with current representative multi-view clustering algorithms, the proposed TMOKSC algorithm performs best on many objective measures. Overall, the robust sequential kernel captures the latent subspace structure of the tensor fusion.
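As a generic illustration of the kernel-tensor idea, the sketch below stacks per-view RBF kernel matrices into a third-order tensor and takes a truncated-SVD low-rank approximation of one unfolding. The ordered (sequential) kernels and ADMM optimization of TMOKSC are not reproduced, and all sizes are arbitrary.

```python
# Hedged sketch: per-view kernel matrices stacked into a tensor, then low-rank approximated.
import numpy as np

def rbf_kernel(X, gamma=0.5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(7)
views = [rng.standard_normal((60, d)) for d in (10, 20, 15)]   # three toy views
K = np.stack([rbf_kernel(V) for V in views], axis=2)           # kernel tensor, 60 x 60 x 3

unfold = K.reshape(-1, K.shape[2])                             # unfold along the view mode
u, s, vt = np.linalg.svd(unfold, full_matrices=False)
r = 2                                                          # arbitrary rank for the demo
K_lowrank = (u[:, :r] * s[:r]) @ vt[:r]                        # rank-r reconstruction
print(K.shape, np.linalg.norm(K_lowrank.reshape(K.shape) - K))
```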
- Research Article
- 10.1007/s11081-025-09993-w
- Jun 24, 2025
- Optimization and Engineering
- Xijun Ma + 4 more
Multi-view Partially Shared Subspace Learning
- Research Article
- 10.1021/acs.jcim.5c00731
- Jun 5, 2025
- Journal of chemical information and modeling
- Zhenqiu Shu + 4 more
Single-cell RNA sequencing (scRNA-seq) has become a crucial technology for analyzing cellular diversity at the single-cell level. Cell clustering is crucial in scRNA-seq data analysis as it accurately identifies distinct cell types and uncovers potential subpopulations. However, most existing scRNA-seq methods rely on a single view for analysis, leading to an incomplete interpretation of the scRNA-seq data. Furthermore, the high dimensionality of the scRNA-seq data and the inevitable noise pose significant challenges for clustering tasks. To address these challenges, in this study, we introduce a novel clustering method, called graph attention network with subspace learning (scGANSL), for scRNA-seq data clustering. Specifically, the proposed scGANSL method first constructs two views using highly variable genes (HVGs) screening and principal component analysis (PCA). They are then individually fed into a multiview shared graph autoencoder, where clustering labels guide the learning of latent representations and the coefficient matrix. Furthermore, the proposed method integrates a zero-inflated negative binomial (ZINB) model into a self-supervised graph attention autoencoder to learn latent representations more effectively. To preserve both local and global structures of scRNA-seq data in the latent representation space, we introduce a local learning and self-expression strategy to guide model training. Experimental results across various scRNA-seq data sets demonstrate that the proposed scGANSL model significantly outperforms other state-of-the-art scRNA-seq data clustering methods.
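For context, a plain-NumPy sketch of the two-view construction this abstract describes (a highly-variable-gene view by variance ranking and a PCA view) is given below. Gene counts and component numbers are arbitrary assumptions, and the graph attention autoencoder and ZINB model are not reproduced.

```python
# Hedged sketch: building an HVG view and a PCA view from an expression matrix.
import numpy as np

def build_views(expr, n_hvg=500, n_pcs=50):
    """expr: cells x genes matrix (e.g., log-normalized counts)."""
    hvg_idx = np.argsort(expr.var(axis=0))[::-1][:n_hvg]     # top-variance genes
    view_hvg = expr[:, hvg_idx]

    centered = expr - expr.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    view_pca = centered @ vt[:n_pcs].T                        # leading principal components
    return view_hvg, view_pca

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    expr = rng.poisson(1.0, size=(300, 2000)).astype(float)   # toy cells x genes matrix
    v1, v2 = build_views(expr)
    print(v1.shape, v2.shape)
```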
- Research Article
- 10.1007/s40747-025-01942-5
- Jun 4, 2025
- Complex & Intelligent Systems
- Zhen Xu + 1 more
Multi-dimensional classification (MDC) aims to simultaneously train a number of multi-class classifiers for multiple heterogeneous class spaces. However, as supervised learning methods, the existing MDC algorithms require that all the training data be precisely labeled in multi-dimensional class spaces, which can be impractical in many real applications. The lack of high-quality labeled data may negatively affect their learning performance. Additionally, the existing MDC algorithms only address scenarios of centralized processing, where all training data must be centrally stored at a single fusion center. Nowadays, however, training data are typically distributed across multiple nodes within a network, making it challenging to transmit them to a fusion center for further processing. To address these issues, in this paper, we propose a novel algorithm called distributed semi-supervised partial multi-dimensional learning (dS2PMDL), which is designed to handle distributed classification of a small proportion of partially multi-dimensional (PMD) data and a large proportion of unlabeled data across a network. In our proposed algorithm, an in-network subspace learning framework is formulated for label recovery. By tracking the representations of non-noisy label vectors in the learned subspace, the reliable labels of training data can be recovered. Subsequently, the multi-dimensional classifier modeled by the random feature map can be adaptively trained using a two-level label-dependency exploitation strategy. The convergence performance and communication complexity of the dS2PMDL algorithm are analyzed. Furthermore, experiments on multiple datasets are performed to validate the effectiveness of the proposed algorithm in semi-supervised partial multi-dimensional classification.
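As background on one building block the abstract mentions, the sketch below implements a standard random Fourier feature map approximating an RBF kernel, the kind of "random feature map" used to parameterize classifiers. It is not the dS2PMDL algorithm, and the feature dimension and kernel width are arbitrary.

```python
# Hedged sketch: random Fourier features approximating exp(-gamma * ||xi - xj||^2).
import numpy as np

def random_fourier_features(X, n_features=200, gamma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))   # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=n_features)                   # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    X = rng.standard_normal((5, 10))
    Z = random_fourier_features(X)
    approx = Z @ Z.T                                                  # kernel approximation
    exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
    print(np.abs(approx - exact).max())                               # small approximation error
```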
- Research Article
- 10.1109/tcyb.2025.3557917
- Jun 1, 2025
- IEEE transactions on cybernetics
- Xu Chen + 4 more
Multiview Subspace Clustering (MvSC) has demonstrated impressive clustering performance on multiview data. Most existing methods rely on either raw features or reduced-redundancy data for subspace representation learning, followed by spectral clustering to derive the final results. However, these methods maintain a fixed feature space during subspace learning, which limits information propagation and compromises both representation quality and clustering performance. To address this issue, this article proposes an adaptive dictionary learning approach for MvSC (AMvSC), which seamlessly integrates redundancy reduction and representation learning within a unified framework to facilitate mutual information propagation. Specifically, an adaptive dictionary learning strategy is designed to automatically reduce redundancy and noise in the original feature space during the subspace representation learning process. This strategy ensures effective information exchange, thereby enhancing the quality of the learned representations. Additionally, low-rank constraints, combined with smoothness and diversity regularization, are applied to further refine the subspace representations and comprehensively capture complex correlations among samples. Finally, an alternating optimization algorithm is developed to iteratively update the unified learning model. Extensive experiments validate the effectiveness and superiority of the proposed method.
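For orientation, here is the basic ridge-regularized self-expressive step (X ≈ XC) that subspace clustering methods of this kind build on. AMvSC's adaptive dictionary learning, low-rank and diversity regularizers, and multiview fusion are not shown, and the zeroed diagonal is a common heuristic rather than the paper's constraint.

```python
# Hedged sketch: ridge-regularized self-expressive coefficients for subspace clustering.
import numpy as np

def self_expressive_coefficients(X, lam=0.1):
    """X has samples as columns; returns C with X ≈ X @ C."""
    n = X.shape[1]
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)   # closed-form ridge solution
    np.fill_diagonal(C, 0.0)                      # discourage trivial self-representation
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.standard_normal((50, 120))            # 50-dim features, 120 samples
    C = self_expressive_coefficients(X)
    affinity = 0.5 * (np.abs(C) + np.abs(C.T))    # symmetric affinity for spectral clustering
    print(affinity.shape)
```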
- Research Article
- 10.1109/jbhi.2025.3532784
- Jun 1, 2025
- IEEE journal of biomedical and health informatics
- Zile Wang + 4 more
Single-cell multi-omics sequencing technology comprehensively considers various molecular features to reveal the complexity of cellular information. Clustering analysis of multi-omics data provides new insight into cellular heterogeneity. However, multi-omics data are characterized by high dimensionality, sparsity, and heterogeneity. Here, we propose an unsupervised clustering algorithm based on deep multi-view subspace learning, called scDMSC. This approach reconciles the heterogeneity of omics data through weighted reconstruction and employs deep subspace learning to identify shared latent features, elucidating the correlations among the omics. Our algorithm was rigorously tested across multiple real and simulated datasets, outperforming existing single-cell multi-omics integration methods and standard single-cell transcriptomics clustering tools in terms of both precision and scalability. Furthermore, differential expression and modality interpretability analyses in downstream applications highlight the model's capacity to uncover biological mechanisms.
- Research Article
- 10.1109/jiot.2025.3530771
- Jun 1, 2025
- IEEE Internet of Things Journal
- Fengyuan Nie + 6 more
Empowering Anomaly Detection in IoT Traffic Through Multiview Subspace Learning
- Research Article
- 10.1016/j.neucom.2025.129885
- Jun 1, 2025
- Neurocomputing
- Zijian Xiao + 5 more
Joint subspace learning and subspace clustering based unsupervised feature selection
- Research Article
- 10.1109/tnnls.2025.3525766
- Jun 1, 2025
- IEEE transactions on neural networks and learning systems
- Xiaohui Wei + 3 more
Bipartite graphs (BiGs) have been proven efficient in handling massive multiview data for clustering. However, how to regulate the structural information of view-specific anchors and the view-shared BiG remains open and needs further study. Hence, a novel dual-structural BiG learning (DsBiGL) method is proposed in this article. It transforms BiG learning into a joint optimization problem of intra-view and inter-view subspace learning (IASL and IRSL) with structural constraints such as k-nearest neighbor (KNN) and low-rank constraints. On one hand, IASL uses the KNN and view-specific low-rank constraints to enhance the discriminativeness of view-specific anchors. On the other hand, IRSL uses an adaptive weighting strategy to obtain the view-shared BiG directly from multiview samples, where the KNN and view-shared low-rank constraints are adopted to encode local connectivity and cluster information between samples. Note that IASL and IRSL are integrated into a unified optimization model, which ensures the interactive enhancement of view-specific anchor representation and view-shared BiG learning. Finally, an algorithm based on iterative optimization is designed to solve the proposed DsBiGL model. Experimental results on various multiview datasets demonstrate the superiority of DsBiGL in terms of clustering results when compared with other methods.
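As a generic illustration of the anchor-based bipartite graph such methods start from, the sketch below links each sample to its k nearest anchors with row-normalized weights. Anchor selection here is random for simplicity, and the view-specific/view-shared constraints and low-rank terms of DsBiGL are not reproduced.

```python
# Hedged sketch: anchor-based bipartite graph with k-NN sparsification.
import numpy as np

def anchor_bipartite_graph(X, n_anchors=10, k=3, seed=0):
    rng = np.random.default_rng(seed)
    anchors = X[rng.choice(len(X), n_anchors, replace=False)]   # simple random anchor pick
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    Z = np.zeros((len(X), n_anchors))
    for i in range(len(X)):
        nn = np.argsort(d2[i])[:k]                              # keep k nearest anchors
        w = np.exp(-d2[i, nn] / (d2[i, nn].mean() + 1e-12))
        Z[i, nn] = w / w.sum()                                  # row-normalized weights
    return Z

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = rng.standard_normal((200, 20))
    Z = anchor_bipartite_graph(X)
    print(Z.shape, Z.sum(axis=1)[:3])                           # rows sum to 1
```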