Articles published on Denoising autoencoder
445 Search results
- New
- Research Article
- 10.3847/1538-4357/ae0d83
- Nov 28, 2025
- The Astrophysical Journal
- Rongrong Liu + 4 more
Abstract Galaxy model subtraction removes the smooth light of nearby galaxies so that fainter sources (e.g., stars, star clusters, and background galaxies) can be identified and measured. Traditional approaches (isophotal or parametric fitting) are semiautomated and can be challenging for large datasets. We build a convolutional denoising autoencoder (DAE) for galaxy model subtraction: images are compressed to a latent representation and reconstructed to yield the smooth galaxy, suppressing other objects. The DAE is trained on GALFIT-generated model galaxies injected into real sky backgrounds and tested on real images from the Next Generation Virgo Cluster Survey. To quantify performance, we conduct an injection-recovery experiment on residual images by adding mock globular clusters (GCs) with known fluxes and positions. Our tests confirm a higher recovery rate of mock GCs near galaxy centers for complex morphologies, while matching ellipse fitting for smooth ellipticals. Overall, the DAE achieves subtraction equivalent to isophotal ellipse fitting for regular ellipticals and superior results for galaxies with high ellipticities or spiral features. Photometry of small-scale sources on DAE residuals is consistent with that on ellipse-subtracted residuals. Once trained, the DAE processes an image cutout in ≲0.1 s, enabling fast, fully automatic analysis of large datasets. We make our code available for download and use.
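The core objective this abstract describes — encode a corrupted image to a latent representation and reconstruct only the smooth component — can be sketched in miniature. The toy below (numpy, a single tied-weight hidden layer, hypothetical 8×8 Gaussian "galaxy" profiles standing in for the paper's cutouts) illustrates the denoising-autoencoder training objective only; it is not the authors' convolutional architecture or GALFIT-based training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's image cutouts: flattened 8x8 smooth "galaxy"
# profiles (a 2-D Gaussian) corrupted by sparse bright "point sources".
def make_batch(n=256, size=8):
    yy, xx = np.mgrid[0:size, 0:size]
    clean = np.exp(-((xx - size / 2) ** 2 + (yy - size / 2) ** 2) / 8.0)
    clean = np.tile(clean.ravel(), (n, 1))
    noisy = clean + 0.5 * (rng.random(clean.shape) < 0.05)
    return noisy, clean

# Single-hidden-layer DAE with tied weights: encode the corrupted input,
# decode, and minimise MSE against the CLEAN target (the DAE objective).
def train_dae(noisy, clean, hidden=16, lr=0.05, epochs=300):
    d = noisy.shape[1]
    W = rng.normal(0.0, 0.1, (d, hidden))
    losses = []
    for _ in range(epochs):
        h = np.tanh(noisy @ W)            # encoder
        recon = h @ W.T                   # tied-weight linear decoder
        err = recon - clean
        losses.append(float(np.mean(err ** 2)))
        # Gradient w.r.t. W through both decoder and encoder paths.
        grad = (noisy.T @ ((err @ W) * (1 - h ** 2)) + (h.T @ err).T) / len(noisy)
        W -= lr * grad
    return W, losses

noisy, clean = make_batch()
_, losses = train_dae(noisy, clean)
```

The residual the paper analyzes would correspond to `noisy - recon`: subtracting the reconstructed smooth model leaves the point sources behind.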
- New
- Research Article
- 10.1016/j.ijpharm.2025.126403
- Nov 16, 2025
- International journal of pharmaceutics
- Olima Uddin + 3 more
Machine learning recovers corrupted pharmaceutical 3D printing formulation data.
- Research Article
- 10.3390/s25216705
- Nov 2, 2025
- Sensors (Basel, Switzerland)
- Giulia Bassani + 2 more
Human Activity Recognition (HAR) is widely used for healthcare, but few works focus on Manual Material Handling (MMH) activities, despite their prevalence and impact on workers’ health. We propose four Deep Learning algorithms for HAR in MMH: Bidirectional Long Short-Term Memory (BiLSTM), Sparse Denoising Autoencoder (Sp-DAE), Recurrent Sp-DAE, and Recurrent Convolutional Neural Network (RCNN). We explored different hyperparameter combinations to maximize classification performance (F1-score) using wearable-sensor data gathered from 14 subjects. We investigated the three best parameter combinations for each network on the full dataset to select the two best-performing networks, which were then compared using 14 datasets of increasing subject numerosity, a 70–30% split, and Leave-One-Subject-Out (LOSO) validation, to evaluate whether they may perform better with a larger dataset. The benchmarking network DeepConvLSTM was tested on the full dataset. BiLSTM performed best in classification and complexity (95.7% with the 70–30% split; 90.3% with LOSO). RCNN performed similarly (95.9%; 89.2%), with a positive trend with subject numerosity. DeepConvLSTM achieved similar classification performance (95.2%; 90.3%) but requires more Multiply-and-ACcumulate (MAC) and more Multiplication-and-Addition (MA) operations, which measure the complexity of the network’s inference process, than BiLSTM and RCNN, respectively. BiLSTM and RCNN perform close to DeepConvLSTM while being computationally lighter, fostering their use in embedded systems. Such lighter algorithms can be readily used in automatic ergonomic and biomechanical risk assessment systems, enabling personalization of risk assessment and easing the adoption of safety measures in industrial practices involving MMH.
- Research Article
- 10.1016/j.medengphy.2025.104406
- Nov 1, 2025
- Medical engineering & physics
- Ahui Li + 1 more
Heart rate estimation for U-Net and LSTM models combining multiple attention mechanisms.
- Research Article
- 10.1029/2024wr039831
- Sep 30, 2025
- Water Resources Research
- Timothy K Johnsen + 6 more
Abstract Machine learning (ML) methods applied in scientific research often deal with interrelated features in high-dimensional data. Reducing data noise and redundancy is needed to increase prediction accuracy and efficiency, especially when dealing with data from field sensors. We explored an unsupervised learning method, the denoising autoencoder (DAE), to extract the underlying data structure from noisy raw data in the context of predicting hydrologic quantities from multiple field sensors. These sensors have intrinsic instrumental noise and occasional malfunctions that cause missing values. Our DAE neural network reconstructed meteorological sensor data containing noise and missing values to predict evapotranspiration (ET) in a mountainous watershed. The DAE reconstructed the sensor variables with a mean coefficient of determination of 0.77 across 15 dimensions representing individual sensors. It reduced variance and bias uncertainties compared to a classical autoencoder model. The reconstruction quality varied across dimensions depending on their cross-correlation and alignment with the underlying data structure. Uncertainties arising from the model structure were overall higher than those resulting from data corruption. We attached the DAE structure to a downstream ET-prediction neural network in three formats and achieved reasonably accurate ET predictions. The use of the DAE notably reduced variance uncertainty in ET prediction. However, excessive variance reduction may be accompanied by an increase in bias due to the intrinsic bias-variance tradeoff. Our method of evaluating and reducing uncertainties in aggregated data from different sources can be used to improve predictive models, process understanding, and uncertainty quantification for better water resource management.
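The setup in this abstract — corrupt multichannel sensor data with noise and missing values, reconstruct it, and score each dimension with a coefficient of determination — can be illustrated with a toy example. All shapes and the linear PCA "reconstruction" below are illustrative assumptions (a classical-autoencoder-style baseline), not the authors' DAE network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for 15 correlated sensor channels: a few latent
# drivers mixed linearly, plus instrument noise (shapes are illustrative).
n, d, k = 2000, 15, 3
latent = rng.normal(size=(n, k))
X = latent @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))

# The corruption a sensor DAE trains against: additive noise plus randomly
# zeroed entries standing in for missing values from sensor malfunctions.
mask = rng.random(X.shape) < 0.1
X_corrupt = X + 0.2 * rng.normal(size=X.shape)
X_corrupt[mask] = 0.0

# Per-dimension coefficient of determination, the metric the abstract
# reports (a mean R^2 of 0.77 across the 15 sensor dimensions).
def r2_per_dim(truth, recon):
    ss_res = np.sum((truth - recon) ** 2, axis=0)
    ss_tot = np.sum((truth - truth.mean(axis=0)) ** 2, axis=0)
    return 1.0 - ss_res / ss_tot

# A linear stand-in for the reconstruction step: project the corrupted data
# onto its top-k principal components (a classical-autoencoder baseline).
mu = X_corrupt.mean(axis=0)
_, _, Vt = np.linalg.svd(X_corrupt - mu, full_matrices=False)
recon = (X_corrupt - mu) @ Vt[:k].T @ Vt[:k] + mu
scores = r2_per_dim(X, recon)
```

Dimensions that align well with the shared latent structure score high, while weakly correlated channels score lower — the cross-correlation effect the abstract notes.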
- Research Article
- 10.1038/s41598-025-21416-5
- Sep 25, 2025
- Scientific Reports
- Mohammad Abboush + 2 more
In the automotive industry, a rigorous testing process based on ISO 26262 is carried out at various stages of the V-model to ensure the quality of software systems. Conventional validation of embedded electronic control units (ECUs) using hardware-in-the-loop (HIL) testing is performed in the late stages using the big-bang integration style, resulting in delayed feedback, lack of scalability, and insufficient fault diagnosis. Furthermore, test-recording analysis is performed manually based on expert knowledge to identify the nature of the failure occurring. This, in turn, increases development costs and effort, delays fault detection, and hinders agile collaboration. To address these gaps, this article proposes a novel continuous integration (CI)-enabled HIL testing framework to facilitate continuous software development through iterative cycles. Furthermore, based on a representative critical-faults dataset, intelligent data-driven ML-assisted Fault Detection and Diagnosis (FDD) models are developed, including LSTM and k-means for the diagnosis of known and unknown sensor-related faults as classification and clustering problems, respectively. The novel aspect of the robust models lies in the integration of a denoising autoencoder (DAE) for the extraction of representative features before the classification and clustering process under noisy conditions. The evaluation outcomes illustrate the superiority of the proposed model for known-fault classification in comparison to other state-of-the-art methods, with an average F1-score of 91.85%. Furthermore, the integration of the DAE with k-means exhibited high clustering performance against noise, with a low mean squared error (MSE) of 0.044 and a Davies–Bouldin index (DBI) of 0.68. It has been demonstrated that the proposed methodology enables more efficient, automated, and accurate fault analysis within automotive software validation workflows.
Consequently, this approach enhances both safety and efficiency in comparison to conventional methodologies.
- Research Article
- 10.2174/0118744710384129250327060846
- Sep 1, 2025
- Current radiopharmaceuticals
- Yibin Liu + 8 more
Nasopharyngeal Carcinoma (NPC) exhibits high incidence in southern China. Despite improved survival with intensity-modulated radiotherapy (IMRT), 10%-20% of patients experience local recurrence. Traditional TNM staging fails to reflect tumor heterogeneity, necessitating robust recurrence prediction models. This study aimed to develop an MRI-based NPC recurrence prediction model by integrating radiomics, deep learning, and clinical features. A total of 184 pathologically confirmed NPC patients receiving radical radiotherapy were included. After propensity score matching (1:1), 136 cases were analyzed. A stacked denoising autoencoder (SDAE) extracted deep features from contrast-enhanced T1-weighted MRI. Radiomic features (morphology, texture, first-order statistics), clinical parameters (gender, age, TNM stage), and SDAE features were combined to construct 12 models using SVM, MLP, logistic regression (LR), and random forest (RF). Performance was evaluated via AUC, accuracy, sensitivity, and specificity, with external validation (91 cases). Model 1 (radiomics + SDAE + clinical features + SVM) achieved the highest AUC (0.89, 95% CI: 0.84-0.93), accuracy (81.5%), sensitivity (67.3%), and specificity (97.9%). External validation showed an AUC of 0.83, sensitivity of 88.9%, and specificity of 78%. The DeLong test confirmed no significant AUC difference between internal and external cohorts (P > 0.05). The fusion of SDAE-enhanced features outperformed traditional radiomics. SVM demonstrated optimal performance in small samples, likely due to its high-dimensional feature handling and anti-overfitting capability. Limitations include the single-center retrospective design and lack of functional imaging (DWI/PET) or molecular markers (EBV-DNA). Future multicenter prospective studies and multimodal data integration are warranted to enhance biological interpretability and clinical utility.
This model provides a tool for early recurrence risk stratification and personalized therapy optimization, advancing precision medicine in NPC management.
- Research Article
- 10.1080/03772063.2025.2547999
- Aug 22, 2025
- IETE Journal of Research
- Mohit Dua + 3 more
Most Automatic Modulation Classification (AMC) frameworks have been built for single-carrier modulation signals. As current wireless communication infrastructure uses Multi-Carrier Modulation (MCM) signals, the need for MCM classification methods has become prevalent. As MCM signals are complex, their effective classification is imperative in modern communication systems. This paper proposes a noise-robust framework that classifies MCM signals through an integrated approach, leveraging a Short-Time Fourier Transform (STFT) spectrogram and a novel Denoising Autoencoder (DAE) at the front end, and a fine-tuned Convolutional Neural Network (CNN) architecture at the back end. The work investigates five different types of MCM signals and, for each MCM signal type, explores two subcarrier modulation schemes: Quadrature Amplitude Modulation 16 (QAM16) and Quadrature Amplitude Modulation 64 (QAM64). This yields a total of ten distinct MCM signals for categorization. Experimental evaluations showcase the efficacy of the proposed approach in achieving superior classification accuracy of 98% at −20 dB and 96% at 20 dB Signal-to-Noise Ratio (SNR).
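The STFT-spectrogram front end this abstract mentions is easy to sketch. The window length, hop, and 1 kHz sample rate below are illustrative choices, not the paper's settings, and the two-tone signal is only a toy stand-in for a multicarrier waveform:

```python
import numpy as np

# Minimal power-spectrogram STFT of the kind used as a front-end input to
# a DAE/CNN classifier: window the signal, apply a Hann taper, take the
# magnitude-squared real FFT of each frame.
def stft_spectrogram(x, win=64, hop=32):
    window = np.hanning(win)
    n_frames = 1 + (len(x) - win) // hop
    frames = np.stack([x[i * hop:i * hop + win] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

# Toy multicarrier-like signal at a 1 kHz sample rate: two subcarrier tones.
t = np.arange(1024) / 1000.0
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 250 * t)
S = stft_spectrogram(x)   # shape: (time frames, frequency bins)
```

Each row of `S` is one time slice of the spectrogram image that the downstream CNN would classify; the DAE would sit between the two, denoising the spectrogram.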
- Research Article
- 10.3390/s25175219
- Aug 22, 2025
- Sensors (Basel, Switzerland)
- Dongdong Chen + 7 more
To address the challenges of weak early-stage loosening fault signals and strong environmental noise interference in escalator drive mainframe anchor bolts, which hinder effective fault feature extraction, this paper proposes an improved Residual Convolutional Denoising Autoencoder (RCDAE) for signal denoising in high-intensity noise environments. The model combines DMS (Dynamically Multimodal Synergistic) loss function, the gated residual mechanism, and CNN–Transformer. The experimental results demonstrate that the proposed model achieves an average accuracy of 93.88% under noise intensities ranging from 10 dB to −10 dB, representing a 2.65% improvement over the baseline model without the improved RCDAE (91.23%). At the same time, in order to verify the generalization performance of the model, the CWRU bearing data set is used to conduct experiments under the same conditions. The experimental results show that the accuracy of the proposed model is 1.30% higher than that of the baseline model without improved RCDAE, validating the method’s significant advantages in noise suppression and feature representation. This study provides an effective solution for loosening fault diagnosis of escalator drive mainframe anchor bolts.
- Research Article
- 10.3390/rs17152717
- Aug 6, 2025
- Remote Sensing
- Songxi Yang + 4 more
Building change detection and building damage assessment are two essential tasks in post-disaster analysis. Building change detection focuses on identifying changed building areas between bi-temporal images, while building damage assessment involves segmenting all buildings and classifying their damage severity. These tasks play a critical role in disaster response and urban development monitoring. Although supervised learning has significantly advanced building change detection and damage assessment, its reliance on large labeled datasets remains a major limitation. In contrast, self-supervised learning enables the extraction of meaningful data representations without explicit training labels. To address this challenge, we propose a self-supervised learning approach that unifies denoising autoencoders and contrastive learning, enabling effective data representation for building change detection and damage assessment. The proposed architecture integrates a dual denoising autoencoder with a Vision Transformer backbone and contrastive learning strategy, complemented by a Feature Pyramid Network-ResNet dual decoder and an Edge Guidance Module. This design enhances multi-scale feature extraction and enables edge-aware segmentation for accurate predictions. Extensive experiments were conducted on five public datasets, including xBD, LEVIR, LEVIR+, SYSU, and WHU, to evaluate the performance and generalization capabilities of the model. The results demonstrate that the proposed Denoising AutoEncoder-enhanced Dual-Fusion Network (DAEDFN) approach achieves competitive performance compared with fully supervised methods. On the xBD dataset, the largest dataset for building damage assessment, our proposed method achieves an F1 score of 0.892 for building segmentation, outperforming state-of-the-art methods. For building damage severity classification, the model achieves an F1 score of 0.632. 
On the building change detection datasets, the proposed method achieves F1 scores of 0.837 (LEVIR), 0.817 (LEVIR+), 0.768 (SYSU), and 0.876 (WHU), demonstrating model generalization across diverse scenarios. Despite these promising results, challenges remain in complex urban environments, small-scale changes, and fine-grained boundary detection. These findings highlight the potential of self-supervised learning in building change detection and damage assessment tasks.
- Research Article
- 10.32628/ijsrst251361
- Aug 3, 2025
- International Journal of Scientific Research in Science and Technology
- Mr P Alagu Manoharan + 1 more
Flight delays have significant implications for airline efficiency and customer satisfaction. Existing prediction models often struggle with accuracy due to the complexity, volume, and noisiness of flight-related data. This study proposes an advanced predictive model using Deep Learning (DL), specifically a Stacked Denoising Autoencoder combined with the Levenberg-Marquardt (LM) algorithm (SDA-LM). The model leverages features such as flight time duration and previous flight delays. Comparative analysis with SAE-LM and SDA models using both balanced and imbalanced datasets shows the SDA-LM model achieves superior precision, accuracy, sensitivity, and F-measure. Experimental results on U.S. domestic airline datasets demonstrate that SDA-LM outperforms traditional methods, including RNN, in delay prediction.
- Research Article
- 10.3390/bioengineering12080829
- Jul 31, 2025
- Bioengineering
- Wanlin Juan + 3 more
Single-cell RNA sequencing (scRNA-seq) has revolutionized molecular biology and genomics by enabling the profiling of individual cell types, providing insights into cellular heterogeneity. Deep learning methods have become popular in single cell analysis for tasks such as dimension reduction, cell clustering, and data imputation. In this work, we introduce DropDAE, a denoising autoencoder (DAE) model enhanced with contrastive learning, to specifically address the dropout events in scRNA-seq data, where certain genes show very low or even zero expression levels due to technical limitations. DropDAE uses the architecture of a denoising autoencoder to recover the underlying data patterns while leveraging contrastive learning to enhance group separation. Our extensive evaluations across multiple simulation settings based on synthetic data and a real-world dataset demonstrate that DropDAE not only reconstructs data effectively but also further improves clustering performance, outperforming existing methods in terms of accuracy and robustness.
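The two ingredients this abstract combines — dropout-style corruption of count data and a contrastive term on embeddings — can be sketched generically. The NT-Xent formulation and all shapes below are standard illustrative choices, not DropDAE's published loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dropout-style corruption for an scRNA-seq DAE: randomly zero a fraction
# of entries, mimicking the technical dropout events the abstract targets.
def dropout_corrupt(X, rate=0.3):
    return X * (rng.random(X.shape) >= rate)

# A generic NT-Xent contrastive term on embeddings: two corrupted views of
# the same cell are positives, other cells in the batch are negatives.
# (DropDAE's exact loss formulation may differ.)
def nt_xent(z1, z2, tau=0.5):
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                           # pairwise similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # positives on diagonal

X = rng.poisson(2.0, size=(8, 20)).astype(float)    # toy count matrix
v1, v2 = dropout_corrupt(X), dropout_corrupt(X)     # two corrupted views
loss = nt_xent(v1, v2)
```

In a full model the contrastive term would act on encoder outputs rather than raw counts, pulling corrupted views of the same cell together while pushing different cells apart — the group-separation effect the abstract credits for improved clustering.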
- Research Article
- 10.1038/s41598-025-06481-0
- Jul 2, 2025
- Scientific Reports
- Mustafa Al-Khafaji + 1 more
The rapid growth of medical imaging data presents challenges for efficient storage and transmission, particularly in clinical and telemedicine applications where image fidelity is crucial. This study proposes a hybrid deep learning-based image compression framework that integrates Stationary Wavelet Transform (SWT), Stacked Denoising Autoencoder (SDAE), Gray-Level Co-occurrence Matrix (GLCM), and K-means clustering. The framework enables multiresolution decomposition, texture-aware feature extraction, and adaptive region-based compression. A custom loss function that combines Mean Squared Error (MSE) and Structural Similarity Index (SSIM) ensures high perceptual quality and compression efficiency. The proposed model was evaluated across multiple benchmark medical imaging datasets and achieved a Peak Signal-to-Noise Ratio (PSNR) of up to 50.36 dB, MS-SSIM of 0.9999, and an encoding-decoding time of 0.065 s. These results demonstrate the model’s capability to outperform existing approaches while maintaining diagnostic integrity, scalability, and speed, making it suitable for real-time and resource-constrained clinical environments.
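The custom loss this abstract describes combines MSE with SSIM. A minimal sketch follows; the single-window (global) SSIM and the weight `alpha` are illustrative simplifications of the usual windowed SSIM, and the paper's exact formulation may differ:

```python
import numpy as np

# Global (single-window) SSIM between two images in [0, L].
def ssim_global(x, y, L=1.0):
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stability constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Weighted combination of pixel-wise MSE and (1 - SSIM), as a training loss.
def combined_loss(x, y, alpha=0.7):
    return alpha * np.mean((x - y) ** 2) + (1 - alpha) * (1.0 - ssim_global(x, y))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                                    # toy image
degraded = np.clip(img + 0.1 * rng.normal(size=img.shape), 0, 1)
```

The MSE term drives pixel fidelity (PSNR) while the SSIM term preserves local structure, which is why such combined losses are favored when diagnostic detail must survive compression.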
- Research Article
- 10.3389/frai.2025.1594372
- Jun 25, 2025
- Frontiers in artificial intelligence
- Ibrahim Nafisah + 8 more
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by challenges in communication, social interactions, and repetitive behaviors. The heterogeneity of symptoms across individuals complicates diagnosis. Neuroimaging techniques, particularly resting-state functional MRI (rs-fMRI), have shown potential for identifying neural signatures of ASD, though challenges such as high dimensionality, noise, and small sample sizes hinder their clinical application. This study proposes a novel approach for ASD detection utilizing deep learning and advanced feature selection techniques. A hybrid model combining Stacked Sparse Denoising Autoencoder (SSDAE) and Multi-Layer Perceptron (MLP) is employed to extract relevant features from rs-fMRI data in the ABIDE I dataset, which was preprocessed using the CPAC pipeline. Feature selection is enhanced through an optimized Hiking Optimization Algorithm (HOA) that integrates Dynamic Opposites Learning (DOL) and Double Attractors to improve convergence toward the optimal subset of features. The proposed model is evaluated using multiple ASD datasets. The performance metrics include an average accuracy of 0.735, sensitivity of 0.765, and specificity of 0.752, surpassing the results of existing state-of-the-art methods. The findings demonstrate the effectiveness of the hybrid deep learning approach for ASD detection. The enhanced feature selection process, coupled with the hybrid model, addresses limitations in current neuroimaging analyses and offers a promising direction for more accurate and clinically applicable ASD detection models.
- Research Article
- 10.1080/01969722.2025.2521703
- Jun 20, 2025
- Cybernetics and Systems
- Thamizharasi M + 1 more
Diagnosing Alzheimer’s disease (AD) involves a combination of clinical evaluations, a review of medical history, and a series of diagnostic tests. As a condition of progressive neurodegeneration, Alzheimer’s primarily affects memory and cognitive capabilities. Early detection is critical for efficient management approaches and prompt interventions that enhance the care and support provided to individuals with the disease. To identify Alzheimer’s disease, a novel Dual Attention convolutional-based Gooseneck Barnacle Search (DA-GBS) algorithm is proposed. The algorithm utilizes preprocessing techniques such as image enhancement, data cleaning, and image resizing to improve image quality for better detection. The Stacked Denoising Autoencoder (SDAE) technique is used to extract features, since it automatically learns the most relevant ones. The integration of attention mechanisms with a dual CNN is employed to classify medical images such as MRI or CT scans. The preprocessed images are obtained from two datasets, namely the Alzheimer’s dataset and the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. A novel Gooseneck Barnacle algorithm with an initial search method is employed to boost the method’s effectiveness. Several evaluation metrics are applied, namely accuracy, recall, F1-score, precision, false positive rate (FPR), false negative rate (FNR), and specificity. The test results demonstrate the effectiveness of the developed DA-GBS method, attaining a superior accuracy of 98.7% on the Alzheimer’s disease dataset.
- Research Article
- 10.1007/s41060-025-00824-w
- Jun 13, 2025
- International Journal of Data Science and Analytics
- Ahmed Nabil Atwa + 2 more
Abstract This study investigates the inherent limitations of conventional dimensionality reduction techniques when applied to time-bars financial datasets. Such datasets are characterized by low correlation, pronounced heteroscedasticity, and non-Gaussian return distributions—properties that often violate the assumptions underpinning traditional methods. Our empirical findings reveal that these techniques tend to exhibit inflated generalization performance on out-of-sample tests, yet fall short in generating interpretable signals for financial machine learning applications. To rigorously examine this issue, we evaluate 14 feature extraction models on time-bars asset data, focusing on their ability to produce robust informational signals. While the Denoising Autoencoder outperforms several competing methods with respect to covariance-based metrics, further statistical analysis indicates that false discoveries may compromise its apparent efficacy. Despite strong cross-validation results, we perform portfolio-optimization backtesting using features derived from both the original and reconstructed datasets within a defined market regime. The near-identical cumulative returns observed across both strategies reinforce our central hypothesis: the marginal utility of conventional feature extraction methods in financial contexts is limited, particularly when they are deployed without addressing the structural idiosyncrasies of financial data.
- Research Article
- 10.3390/app15126523
- Jun 10, 2025
- Applied Sciences
- Jun-Gyo Jang + 3 more
This study analyzes the impact of different types of random noise applied in Denoising Autoencoder (DAE) training on fault diagnosis performance, with the aim of improving noise removal for vibration time series data. While conventional studies typically train DAEs using Gaussian random noise, such noise does not fully reflect the complex noise patterns observed in real-world industrial environments. Therefore, this study proposes a novel approach that uses high-frequency noise components extracted from actual vibration data as training noise for the DAE. Both Gaussian and high-frequency noise were used to train separate DAE models, and statistical features (mean, RMS, standard deviation, kurtosis, skewness) were extracted from the denoised signals. The fault diagnosis rates were calculated using One-Class Support Vector Machines (OC-SVM) for performance comparison. As a result, the model trained with high-frequency noise achieved a 0.0293 higher average F1-score than the Gaussian-based model. Notably, the fault detection accuracy using the kurtosis feature improved significantly from 26.22% to 99.5%. Furthermore, the proposed method outperformed the conventional denoising technique based on the Wavelet Transform, demonstrating superior noise reduction capability. These findings demonstrate that incorporating real high-frequency components from vibration data into the DAE training process is effective in enhancing both noise removal and fault diagnosis performance.
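The abstract's central move — train the DAE on high-frequency components extracted from real vibration data instead of synthetic Gaussian noise — can be sketched as follows. The moving-average high-pass step, signal shapes, and window length below are illustrative assumptions; the paper's actual filtering may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Extract the high-frequency component of a measured signal by subtracting
# a moving-average (low-frequency) trend — a simple high-pass stand-in.
def high_freq_component(x, win=16):
    trend = np.convolve(x, np.ones(win) / win, mode="same")  # low-freq trend
    return x - trend                                          # high-freq residual

t = np.arange(2048)
# Toy "measured vibration": slow sinusoidal component plus broadband noise.
measured = np.sin(2 * np.pi * t / 256) + 0.3 * rng.normal(size=t.size)
hf_noise = high_freq_component(measured)

# Training pair for the DAE: clean target plus the extracted REAL noise,
# instead of the usual synthetic Gaussian corruption.
clean = np.sin(2 * np.pi * t / 256)
noisy_input = clean + hf_noise
```

Because `hf_noise` carries the spectral character of the actual measurement environment, a DAE trained on `(noisy_input, clean)` pairs sees corruption closer to deployment conditions than Gaussian noise would provide.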
- Research Article
- 10.1007/s42417-025-01921-7
- May 31, 2025
- Journal of Vibration Engineering & Technologies
- Yajun Shang + 1 more
A Planetary Gear Fault Diagnosis Using Transfer Learning and Temporal Residual Denoising Auto-Encoders
- Research Article
- 10.59568/kjset-2025-4-1-21
- May 25, 2025
- KIU journal of science engineering and technology
- Barbara Kobuhumure + 2 more
Noise interference and inconsistent image quality pose growing issues for facial recognition systems, particularly in urban surveillance settings. While traditional denoising techniques, such as wavelet-based transforms and other classical methods, are good at retaining texture, they are not very effective when dealing with complicated noise patterns and high computing demands. Meanwhile, low-power and embedded applications have found success with lightweight improvements like Local Binary Patterns (LBP); nonetheless, their limited capacity to interpret high-resolution and color images limits their wider use. The advantages and disadvantages of these traditional and contemporary methods are critically examined in this paper, with an emphasis on deep learning-based models such as Stacked Denoising Autoencoders (SDAE). Although these models are prone to overfitting and necessitate careful parameter adjustment, they have demonstrated impressive effectiveness in learning noise-robust representations. The study also investigates the possibility of combining stacked autoencoders and Histogram of Oriented Gradients (HOG) features as a hybrid approach to overcome current bottlenecks. Based on this investigation, a robust denoising framework can be achieved by combining the denoising power of SDAEs with the edge-preserving capabilities of HOG for enhanced feature extraction under structured and mixed noise conditions. This integration is positioned as a future-ready solution for building scalable, real-time, and noise-resilient facial recognition pipelines.
- Research Article
- 10.1016/j.jphs.2025.03.005
- May 1, 2025
- Journal of pharmacological sciences
- Yamato Ishii + 3 more
Conventional wired systems for recording intestinal motility using strain-gauge transducers physically limit animal movement and are not ideal for long-term studies. Here, we developed a wireless recording system that allows continuous monitoring of intestinal activity in freely moving rats. We also developed a denoising autoencoder that isolates intestinal motility signals from locomotor noise while maintaining a 10-s temporal resolution. The refined data revealed decreased intestinal motility while the rats were behaviorally active. This system has broad applications for in vivo physiological research.