DBFE-Net: A dual-branch feature extraction network for Gaussian blind denoising

  • Abstract
  • Literature Map
  • References
  • Similar Papers
Abstract

References (showing 10 of 49 papers)
  • PathFormer: A Transformer-Based Framework for Vision-Centric Autonomous Navigation in Off-Road Environments. Bilal Hassan + 5 more. Oct 14, 2024. DOI: 10.1109/iros58592.2024.10802399. Cited by 2.
  • A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. D. Martin + 3 more. Jul 7, 2001. DOI: 10.1109/iccv.2001.937655. Cited by 6269.
  • Bayesian Learning of Sparse Multiscale Image Representations. James Michael Hughes + 2 more. IEEE Transactions on Image Processing. Dec 1, 2013. DOI: 10.1109/tip.2013.2280188. Cited by 21.
  • Enhanced Deep Residual Networks for Single Image Super-Resolution. Bee Lim + 4 more. Jul 1, 2017. DOI: 10.1109/cvprw.2017.151. Cited by 6037. Open Access.
  • Bidirectional image denoising with blurred image feature. Linwei Fan + 5 more. Pattern Recognition. May 6, 2024. DOI: 10.1016/j.patcog.2024.110563. Cited by 2.
  • Toward Convolutional Blind Denoising of Real Photographs. Shi Guo + 4 more. Jun 1, 2019. DOI: 10.1109/cvpr.2019.00181. Cited by 954. Open Access.
  • Real Image Denoising With Feature Attention. Saeed Anwar + 1 more. Oct 1, 2019. DOI: 10.1109/iccv.2019.00325. Cited by 560. Open Access.
  • FFDNet: Toward a Fast and Flexible Solution for CNN-Based Image Denoising. Kai Zhang + 2 more. IEEE Transactions on Image Processing. May 25, 2018. DOI: 10.1109/tip.2018.2839891. Cited by 2356. Open Access.
  • Dual Adversarial Network: Toward Real-World Noise Removal and Noise Generation. Zongsheng Yue + 3 more. Jan 1, 2020. DOI: 10.1007/978-3-030-58607-2_3. Cited by 172. Open Access.
  • A High-Quality Denoising Dataset for Smartphone Cameras. Abdelrahman Abdelhamed + 2 more. Jun 1, 2018. DOI: 10.1109/cvpr.2018.00182. Cited by 698.

Similar Papers
  • Research Article. Artifact and Detail Attention Generative Adversarial Networks for Low-Dose CT Denoising. Xiong Zhang + 5 more. IEEE Transactions on Medical Imaging. Dec 1, 2021. DOI: 10.1109/tmi.2021.3101616. Cited by 54.

Generative adversarial networks are being extensively studied for low-dose computed tomography denoising. However, because noise, artifacts, and the high-frequency components of useful tissue have similar distributions, existing GAN-based denoising networks struggle to separate the artifacts and noise in low-dose computed tomography images. In addition, aggressive denoising may damage the edge and structural information of the computed tomography image and leave the denoised image overly smooth. To solve these problems, we propose a novel denoising network called the artifact and detail attention generative adversarial network. First, a multi-channel generator is proposed: on top of the main feature extraction channel, an artifact and noise attention channel and an edge feature attention channel are added to strengthen the network's attention to noise, artifact, and edge features. Additionally, a new structure called the multi-scale Res2Net discriminator is proposed, in which the receptive field is expanded by extracting multi-scale features within the same scale of the image to improve the discriminator's ability. The loss functions are designed separately for each sub-channel of the denoising network according to its function; through their cooperation, the network converges faster, trains more stably, and denoises more effectively. Experimental results show that the proposed denoising network preserves the important information of the low-dose computed tomography image and achieves a better denoising effect than state-of-the-art algorithms.
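
The cooperation of multiple loss functions described above is, in practice, usually a weighted sum of per-branch terms. The following minimal PyTorch sketch shows that general pattern only; the specific terms, weights, and tensor names are illustrative assumptions, not the settings of this paper.

    import torch
    import torch.nn as nn

    mse = nn.MSELoss()
    l1 = nn.L1Loss()

    def total_loss(denoised, target, edge_pred, edge_target, adv_score,
                   w_rec=1.0, w_edge=0.5, w_adv=0.01):
        # Weighted sum of reconstruction, edge-attention, and adversarial terms
        # (weights are placeholders, not the paper's values).
        rec = mse(denoised, target)        # main feature extraction channel
        edge = l1(edge_pred, edge_target)  # edge feature attention channel
        adv = -adv_score.mean()            # generator side of the GAN objective
        return w_rec * rec + w_edge * edge + w_adv * adv

    # Shape-only demo of how the terms combine.
    d, t = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    e, et = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    print(total_loss(d, t, e, et, torch.rand(2, 1)).item())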

  • Research Article. Joint Denoising and Upscaling via Multi-branch and Multi-scale Feature Network. Pawel Kazmierczyk + 6 more. Proceedings of the ACM on Computer Graphics and Interactive Techniques. May 22, 2025. DOI: 10.1145/3728297.

Deep learning-based denoising and upscaling techniques have emerged to enhance framerates for real-time rendering. A single neural network for joint denoising and upscaling offers the advantage of sharing parameters in the feature space, enabling efficient prediction of filter weights for both. However, devising an efficient feature extraction network that exploits the different characteristics of the inputs for the two combined problems remains an open research question. We propose a multi-branch, multi-scale feature extraction network for joint neural denoising and upscaling. The proposed multi-branch U-Net architecture is lightweight and effectively accounts for the different characteristics of noisy color buffers and noise-free, aliased auxiliary buffers. Our technique produces denoising of superior quality at a target resolution (4K), given noisy 1 spp Monte Carlo renderings and auxiliary buffers at a low resolution (1080p), compared with state-of-the-art methods.

  • Research Article. An application of deep dual convolutional neural network for enhanced medical image denoising. Alpana Sahu + 2 more. Medical & Biological Engineering & Computing. Jan 14, 2023. DOI: 10.1007/s11517-022-02731-9. Cited by 14.

This work investigates the medical image denoising (MID) application of the dual denoising network (DudeNet) model for chest X-ray (CXR) images. The DudeNet model comprises four components: a feature extraction block with a sparse mechanism, an enhancement block, a compression block, and a reconstruction block. The developed model uses residual learning to boost denoising performance and batch normalization to accelerate training. The proposed model is named the dual convolutional medical image-enhanced denoising network (DCMIEDNet). The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) are used to assess MID performance for five additive white Gaussian noise (AWGN) levels of σ = 15, 25, 40, 50, and 60 in CXR images. The presented investigations reveal that the PSNR and SSIM offered by DCMIEDNet are better than those of several popular state-of-the-art models such as block matching and 3D filtering, the denoising convolutional neural network, and the feature-guided denoising convolutional neural network. It is also superior to recently reported MID models such as the deep convolutional neural network with residual learning, the real-valued medical image denoising network, and the complex-valued medical image denoising network. Based on the presented experiments, it is concluded that applying the DudeNet methodology in DCMIEDNet promises to be quite helpful for physicians.
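
The evaluation protocol above (corrupting images with AWGN at a fixed σ and scoring with PSNR and SSIM) can be reproduced with standard tools. Below is a minimal sketch assuming 8-bit-range grayscale arrays and scikit-image's metric implementations; the constant test image and function names are placeholders, not material from the paper.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def add_awgn(clean, sigma, seed=0):
        # Additive white Gaussian noise with standard deviation sigma (0-255 scale).
        rng = np.random.default_rng(seed)
        noisy = clean.astype(np.float64) + rng.normal(0.0, sigma, clean.shape)
        return np.clip(noisy, 0, 255)

    def evaluate(clean, restored):
        # PSNR and SSIM against the clean reference, on the 0-255 range.
        psnr = peak_signal_noise_ratio(clean, restored, data_range=255)
        ssim = structural_similarity(clean, restored, data_range=255)
        return psnr, ssim

    clean = np.full((128, 128), 128.0)  # stand-in for a CXR image
    for sigma in (15, 25, 40, 50, 60):  # the noise levels listed above
        noisy = add_awgn(clean, sigma)
        psnr, ssim = evaluate(clean, noisy)  # replace `noisy` with a model's output
        print(f"sigma={sigma}: PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")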

  • Research Article. An exploratory study on ultrasound image denoising using feature extraction and adversarial diffusion model. Yue Hu + 3 more. Medical Physics. Oct 1, 2025. DOI: 10.1002/mp.70023.

In ultrasound imaging, the generated images contain speckle noise owing to the mechanism underlying image generation. Speckle noise directly affects image analysis, so its effective suppression is necessary. Existing ultrasound image denoising offers limited performance and causes loss of structural information. To address these challenges and improve ultrasound image quality, we develop a new denoising method based on the diffusion model (DM). This exploratory study proposes a DM-based denoising method, the adversarial DM with feature extraction network (ADM-ExNet), to investigate the potential of combining diffusion models and generative adversarial networks (GANs) for ultrasound image denoising. Specifically, we replace the reverse process of the DM with a GAN and give both the generator and the discriminator a U-Net structure. A structural feature extraction network is also incorporated into the model to construct a loss function that improves detail retention. Noise levels were simulated by adding Gaussian noise of varying intensity to the original ultrasound images. We employed three public datasets, HC18, CAMUS, and Ultrasound Nerve, which contain ultrasound images of the fetal head circumference, heart, and nerves, respectively. Each image was resized to a fixed resolution, and the data were split 9:1 into training and validation sets. Mean square error (MSE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM) served as the primary evaluation metrics. To rigorously validate the statistical significance of performance differences, we further applied false discovery rate (FDR) correction for hypothesis testing and computed Cohen's d effect sizes to quantify the magnitude of improvements over baselines. ADM-ExNet was compared with three traditional filtering methods and four deep learning methods with a U-Net structure. The proposed ADM-ExNet significantly enhances denoising performance across all datasets, with PSNR improvements exceeding 12 dB over noisy baselines and MSE reductions of over 90%. Notably, ADM-ExNet achieves high SSIM values (e.g., 0.941 on HC18 vs. 0.369 for noisy images), demonstrating superior structural preservation without excessive smoothing. Statistical significance (FDR-adjusted) and Cohen's d effect sizes (up to d = 3.8 on CAMUS) confirm its robustness, outperforming traditional methods and deep learning competitors in both visual quality and quantitative metrics (PSNR, SSIM) across noise levels. This balance of detail retention and noise suppression highlights the exploratory potential of ADM-ExNet. The proposed method improves the quality of ultrasound images with various structural features, effectively reducing noise while retaining details.
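
Cohen's d, used above to gauge the size of the improvements, has a simple closed form. A minimal sketch of the common pooled-standard-deviation variant applied to per-image PSNR scores follows; the numbers are invented for illustration, and this may not be the exact variant used in the study.

    import numpy as np

    def cohens_d(a, b):
        # (mean_a - mean_b) divided by the pooled standard deviation.
        a, b = np.asarray(a, float), np.asarray(b, float)
        na, nb = len(a), len(b)
        pooled = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled)

    # Illustrative per-image PSNR scores (dB): proposed method vs. a baseline.
    proposed = [31.2, 30.8, 31.9, 30.5, 31.4]
    baseline = [28.9, 29.1, 29.6, 28.4, 29.0]
    print(f"Cohen's d = {cohens_d(proposed, baseline):.2f}")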

  • Conference Article. Residual Dilated Network with Attention for Image Blind Denoising. Guanqun Hou + 2 more. Jul 1, 2019. DOI: 10.1109/icme.2019.00051. Cited by 5.

Image denoising has recently witnessed substantial progress. However, many existing methods remain suboptimal for texture restoration because they treat different image regions and channels indiscriminately. They also require the noise level to be specified in advance, which largely hinders their use in blind denoising. We therefore introduce both an attention mechanism and automatic noise level estimation into image denoising. Specifically, we propose a new, effective end-to-end attention-embedded neural network for image denoising, named the Residual Dilated Attention Network (RDAN). RDAN is composed of a series of tailored Residual Dilated Attention Blocks (RDAB) and Residual Conv Attention Blocks (RCAB). The RDAB and RCAB incorporate both non-local and local operations, enabling a comprehensive capture of structural information. In addition, we incorporate Gaussian-based noise level estimation into RDAN to accomplish blind denoising. Experimental results demonstrate that RDAN substantially outperforms state-of-the-art denoising methods while preserving image texture well.
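
The abstract does not detail how the Gaussian noise level is estimated. A classical, widely used estimator (not necessarily the one inside RDAN) takes the median absolute deviation of the finest-scale diagonal wavelet coefficients; a minimal sketch with PyWavelets follows, where the test image is synthetic.

    import numpy as np
    import pywt

    def estimate_sigma(image):
        # Robust Gaussian noise-level estimate: MAD of the diagonal detail
        # coefficients of a single-level 2-D Haar ("db1") decomposition.
        _, (_, _, diag) = pywt.dwt2(image.astype(np.float64), "db1")
        return np.median(np.abs(diag)) / 0.6745

    rng = np.random.default_rng(0)
    noisy = rng.normal(0.0, 25.0, (256, 256))  # pure noise at sigma = 25
    print(f"estimated sigma = {estimate_sigma(noisy):.1f}")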

  • Research Article. A Multi-Scale Feature Extraction-Based Normalized Attention Neural Network for Image Denoising. Yi Wang + 3 more. Electronics. Jan 29, 2021. DOI: 10.3390/electronics10030319. Cited by 21.

Owing to the rapid development of deep learning and artificial intelligence techniques, denoising via neural networks has drawn great attention for its flexibility and excellent performance. However, in most convolutional denoising methods the convolution kernel is only one layer deep, and features at distinct scales are neglected. Moreover, the convolution operation treats all channels equally and ignores the relationships between channels. In this paper, we propose a multi-scale feature extraction-based normalized attention neural network (MFENANN) for image denoising. In MFENANN, we define a multi-scale feature extraction block to extract and combine features at distinct scales of the noisy image. In addition, we propose a normalized attention network (NAN) to learn the relationships between channels, which smooths the optimization landscape and speeds up convergence when training an attention model. We introduce the NAN into convolutional denoising so that each channel receives its own gain and channels can play different roles in the subsequent convolution. To verify the effectiveness of the proposed MFENANN, we ran experiments on both grayscale and color image sets with noise levels ranging from 0 to 75. The experimental results show that, compared with several state-of-the-art denoising methods, the images restored by MFENANN have higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values and a better overall appearance.
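
Extracting and combining features at distinct scales, as described above, is commonly realized as parallel convolutions with different kernel sizes whose outputs are concatenated and fused. The PyTorch sketch below is a generic illustration of that pattern, not the exact MFENANN block; the channel widths and kernel sizes are assumptions.

    import torch
    import torch.nn as nn

    class MultiScaleBlock(nn.Module):
        # Parallel 3x3 / 5x5 / 7x7 convolutions, concatenated and fused by a
        # 1x1 convolution, with a residual connection back to the input.
        def __init__(self, in_ch=64, branch_ch=32):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, branch_ch, k, padding=k // 2) for k in (3, 5, 7)
            )
            self.fuse = nn.Conv2d(3 * branch_ch, in_ch, kernel_size=1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            feats = [self.act(branch(x)) for branch in self.branches]
            return self.fuse(torch.cat(feats, dim=1)) + x

    x = torch.randn(1, 64, 32, 32)
    print(MultiScaleBlock()(x).shape)  # torch.Size([1, 64, 32, 32])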

  • Conference Article. Convolutional Neural Networks for Noise Classification and Denoising of Images. Dibakar Sil + 2 more. Oct 1, 2019. DOI: 10.1109/tencon.2019.8929277. Cited by 27.

The goal of this paper is to determine whether a convolutional neural network (CNN) performs better than existing blind algorithms for image denoising and, if so, whether the noise statistics affect the performance gap. For automatic identification of the noise distribution, we used two different convolutional neural networks, VGG-16 and Inception-v3, and found that Inception-v3 identifies the noise distribution more accurately over a set of nine possible distributions, namely Gaussian, log-normal, uniform, exponential, Poisson, salt-and-pepper, Rayleigh, speckle, and Erlang. Next, for each of these noisy image sets, we compared the performance of FFDNet, a CNN-based denoising method, with Noise Clinic, a blind denoising algorithm. CNN-based denoising was found to outperform blind denoising in general, with an average improvement of 16% in peak signal-to-noise ratio (PSNR). The improvement is, however, most prominent for salt-and-pepper noise, with a PSNR difference of 72%, whereas for distributions such as Gaussian, FFDNet achieved only a 2% improvement over Noise Clinic. The results indicate that, for developing an optimal CNN-based denoising platform, the noise distribution must be taken into account.

  • Research Article. Variational Bayesian deep network for blind Poisson denoising. Hao Liang + 4 more. Pattern Recognition. Jul 6, 2023. DOI: 10.1016/j.patcog.2023.109810. Cited by 8.


  • Research Article. A new real-time resource-efficient algorithm for ECG denoising, feature extraction and classification-based wearable sensor network. Ali Fadel Marhoon + 1 more. International Journal of Biomedical Engineering and Technology. Jan 1, 2015. DOI: 10.1504/ijbet.2015.070032. Cited by 4.

Long-term patient monitoring is an important issue, especially for the elderly, and can be performed using a wearable wireless sensor network. These sensors have limited resources in terms of computation, storage, size, and, above all, power. In this work, a real-time, resource-efficient algorithm has been implemented and tested in practice so that not all of the electrocardiogram (ECG) data are transmitted to the server for later processing. The algorithm reads a sample window and processes it on the sensor node using an adaptive filter with a differentiator, followed by a fast and simple feature extraction algorithm that locates the P, Q, R, S, and T waves of the ECG signal. Finally, a classifier distinguishes between normal and abnormal ECG signals. The work has been implemented on Shimmer sensor nodes using the open-source TinyOS 2.1.2 and Python 2.7.
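
The on-node pipeline above (filter, differentiate, then locate the characteristic waves) resembles classic QRS detection. The SciPy sketch below covers only the R-peak stage on a synthetic signal; the sampling rate, band edges, and thresholds are placeholder values, not the paper's.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def detect_r_peaks(ecg, fs=250):
        # Band-pass to emphasise the QRS complex, differentiate and square,
        # then pick prominent peaks at least 0.25 s apart.
        b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
        energy = np.gradient(filtfilt(b, a, ecg)) ** 2
        peaks, _ = find_peaks(energy, distance=int(0.25 * fs),
                              height=0.3 * energy.max())
        return peaks

    fs = 250
    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * t) ** 63 + 0.05 * np.random.randn(t.size)  # spiky test signal
    print(len(detect_r_peaks(ecg, fs)), "peak candidates found")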

  • Research Article. Research on microseismic denoising method based on CBDNet. Jianchao Lin + 3 more. Artificial Intelligence in Geosciences. Feb 17, 2023. DOI: 10.1016/j.aiig.2023.02.002. Cited by 2.


  • Research Article. A coarse-to-fine multi-scale feature hybrid low-dose CT denoising network. Zefang Han + 4 more. Signal Processing: Image Communication. Jul 13, 2023. DOI: 10.1016/j.image.2023.117009. Cited by 4.


  • Research Article. Adaptive Dynamic Filtering Network for Image Denoising. Hao Shen + 2 more. Proceedings of the AAAI Conference on Artificial Intelligence. Jun 26, 2023. DOI: 10.1609/aaai.v37i2.25317. Cited by 26.

In image denoising networks, feature scaling is widely used to enlarge the receptive field size and reduce computational costs. This practice, however, also leads to the loss of high-frequency information and fails to consider within-scale characteristics. Recently, dynamic convolution has exhibited powerful capabilities in processing high-frequency information (e.g., edges, corners, textures), but previous works lack sufficient spatial contextual information in filter generation. To alleviate these issues, we propose to employ dynamic convolution to improve the learning of high-frequency and multi-scale features. Specifically, we design a spatially enhanced kernel generation (SEKG) module to improve dynamic convolution, enabling the learning of spatial context information with a very low computational complexity. Based on the SEKG module, we propose a dynamic convolution block (DCB) and a multi-scale dynamic convolution block (MDCB). The former enhances the high-frequency information via dynamic convolution and preserves low-frequency information via skip connections. The latter utilizes shared adaptive dynamic kernels and the idea of dilated convolution to achieve efficient multi-scale feature extraction. The proposed multi-dimension feature integration (MFI) mechanism further fuses the multi-scale features, providing precise and contextually enriched feature representations. Finally, we build an efficient denoising network with the proposed DCB and MDCB, named ADFNet. It achieves better performance with low computational complexity on real-world and synthetic Gaussian noisy datasets. The source code is available at https://github.com/it-hao/ADFNet.
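
One ingredient named above, reusing a shared kernel at several dilation rates to gather multi-scale context cheaply, can be shown directly with the functional convolution API. The PyTorch sketch below illustrates only that idea, not the full SEKG/MDCB design, and the channel sizes are assumptions.

    import torch
    import torch.nn.functional as F

    def shared_kernel_multiscale(x, weight, dilations=(1, 2, 3)):
        # Apply the same 3x3 kernel at several dilation rates (padding keeps the
        # spatial size fixed) and average the resulting feature maps.
        outs = [F.conv2d(x, weight, padding=d, dilation=d) for d in dilations]
        return torch.stack(outs).mean(dim=0)

    x = torch.randn(1, 64, 32, 32)
    w = torch.randn(64, 64, 3, 3)  # one kernel shared across all scales
    print(shared_kernel_multiscale(x, w).shape)  # torch.Size([1, 64, 32, 32])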

  • Research Article. Noise-assisted hybrid attention networks for low-dose PET and CT denoising. Hengzhi Xue + 2 more. Medical Physics. Oct 21, 2024. DOI: 10.1002/mp.17430.

Positron emission tomography (PET) and computed tomography (CT) play a vital role in tumor-related medical diagnosis, assessment, and treatment planning. However, full-dose PET and CT expose patients to excessive radiation, whereas low-dose images compromise image quality and impair subsequent tumor recognition and disease diagnosis. To solve these problems, we propose a Noise-Assisted Hybrid Attention Network (NAHANet) that reconstructs full-dose PET and CT images from low-dose PET (LDPET) and CT (LDCT) images, reducing patient radiation risk while preserving the performance of subsequent tumor recognition. NAHANet contains two branches: a noise feature prediction branch (NFPB) and a cascaded reconstruction branch. The NFPB provides noise features for the cascaded reconstruction branch, which comprises a shallow feature extraction module and a reconstruction module built from a series of cascaded noise feature fusion blocks (NFFBs). Each NFFB fuses the features extracted from the low-dose images with the noise features obtained by the NFPB to improve the feature extraction capability. To validate the effectiveness of NAHANet, we performed experiments on two publicly available datasets: the Ultra-low Dose PET Imaging Challenge dataset and the Low Dose CT Grand Challenge dataset. The proposed NAHANet achieved higher performance on common indicators. For example, on the CT dataset the PSNR and SSIM improved by 4.1 dB and 0.06, respectively, and the rMSE decreased by 5.46 compared with the LDCT; on the PET dataset the PSNR and SSIM improved by 3.37 dB and 0.02, and the rMSE decreased by 9.04 compared with the LDPET. This paper thus proposes a transformer-based denoising algorithm that uses hybrid attention to extract high-level features of low-dose images and fuses noise features to optimize denoising performance, achieving good improvements on low-dose CT and PET datasets.

  • Research Article. Dual-domain fusion deep convolutional neural network for low-dose CT denoising. Zhiyuan Li + 5 more. Journal of X-Ray Science and Technology. Jul 13, 2023. DOI: 10.3233/xst-230020. Cited by 2.

In view of the health risks posed by X-ray radiation, the main goal of this research is to obtain high-quality CT images while reducing the X-ray dose. In recent years, convolutional neural networks (CNNs) have shown excellent performance in removing low-dose CT noise. However, previous work has mainly focused on deepening CNNs and improving feature extraction without considering the fusion of features from the frequency domain and the image domain. To address this issue, we propose and test a new LDCT image denoising method based on a dual-domain fusion deep convolutional neural network (DFCNN). The method operates in two domains: the DCT domain and the image domain. In the DCT domain, we design a new residual CBAM network that strengthens the internal and external relations among channels while reducing noise, promoting richer image structure information. For the image domain, we propose a top-down multi-scale codec network as the denoising network, obtaining more acceptable edges and textures along with multi-scale information. The feature maps of the two domains are then fused by a combination network. The proposed method was validated on the Mayo and Piglet datasets. The denoising algorithm is optimal in both subjective and objective evaluation indexes compared with other state-of-the-art methods reported in previous studies. The results demonstrate that, with the new fusion model, denoising results in both the image domain and the DCT domain are better than those of models developed using features extracted in the image domain alone.
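
Working in the DCT domain alongside the image domain, as described above, only requires a forward and inverse 2-D DCT around the frequency-domain branch. The SciPy sketch below shows the lossless round trip; the processing step in the middle is left as a placeholder for the paper's residual CBAM branch.

    import numpy as np
    from scipy.fft import dctn, idctn

    def to_dct(image):
        # Orthonormal 2-D DCT (type II) of an image.
        return dctn(image, type=2, norm="ortho")

    def from_dct(coeffs):
        # Inverse 2-D DCT back to the image domain.
        return idctn(coeffs, type=2, norm="ortho")

    img = np.random.rand(64, 64)
    coeffs = to_dct(img)
    # ... frequency-domain denoising (e.g., a residual CBAM branch) would go here ...
    recon = from_dct(coeffs)
    print(np.allclose(img, recon))  # True: the transform pair reconstructs exactly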

  • Research Article. Image Denoising with GAN Based Model. Peizhu Gong + 2 more. Journal of Information Hiding and Privacy Protection. Jan 1, 2020. DOI: 10.32604/jihpp.2020.010453. Cited by 4.

Image denoising is often used as a preprocessing step in computer vision tasks and can help improve the accuracy of image processing models. Owing to the imperfection of imaging systems, transmission media, and recording equipment, digital images are often contaminated with various kinds of noise during their formation, which degrades visual quality and even hinders recognition. Noise directly affects edge detection, feature extraction, pattern recognition, and related processing, making it difficult to improve results by modifying the model alone. Many traditional filtering methods perform poorly because they lack an optimal expression of, and adaptation to, specific images. Deep learning, meanwhile, opens up new possibilities for image denoising. In this paper, we propose a novel neural network based on generative adversarial networks for image denoising. Inspired by U-Net, our method employs a symmetrical encoder-decoder generator: the encoder uses convolutional layers to extract features, while the decoder outputs the noise in the images through deconvolutional layers. Shortcuts are added between designated layers, which preserve image texture details and prevent gradient explosion. In addition, to improve training stability, we incorporate the Wasserstein distance into the loss function. We evaluate the model with the peak signal-to-noise ratio (PSNR), and experimental results demonstrate its effectiveness. Compared with state-of-the-art approaches, our method delivers competitive performance.
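
The Wasserstein term mentioned above is typically implemented as the difference between the critic's mean scores on real and generated images (a gradient penalty or weight clipping is usually added, omitted here). The PyTorch sketch below shows only those loss terms; the tiny critic is a placeholder, not the paper's discriminator.

    import torch
    import torch.nn as nn

    critic = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    def wasserstein_losses(real, fake):
        # Critic minimises E[D(fake)] - E[D(real)]; generator minimises -E[D(fake)].
        critic_loss = critic(fake.detach()).mean() - critic(real).mean()
        generator_loss = -critic(fake).mean()
        return critic_loss, generator_loss

    real = torch.randn(4, 1, 64, 64)  # clean patches
    fake = torch.randn(4, 1, 64, 64)  # stand-in for generator output
    c_loss, g_loss = wasserstein_losses(real, fake)
    print(c_loss.item(), g_loss.item())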

More from: Displays
  • Research Article. Pushing the boundaries of immersion and storytelling: A technical review of Unreal Engine. Oleksandra Sobchyshak + 2 more. Displays. Nov 1, 2025. DOI: 10.1016/j.displa.2025.103268.
  • Research Article. Class-Weighted Prompting for rehearsal-free class-incremental learning. Hong Ma + 4 more. Displays. Nov 1, 2025. DOI: 10.1016/j.displa.2025.103274.
  • Research Article. MS-BFIRNet: Fine-grained Background Injection and Foreground Reconstruction with multi-supervision for few-shot segmentation. Lan Guo + 7 more. Displays. Nov 1, 2025. DOI: 10.1016/j.displa.2025.103276.
  • Research Article. Evaluating touchscreen interface design for information Analysis: A comparative study of juxtaposition and overlay display modes. Kuang-Jou Chen + 6 more. Displays. Nov 1, 2025. DOI: 10.1016/j.displa.2025.103277.
  • Research Article. Towards camouflaged object detection via global guidance and cascading refinement. Dan Wu + 2 more. Displays. Nov 1, 2025. DOI: 10.1016/j.displa.2025.103278.
  • Research Article. AIBench: Towards trustworthy evaluation under the 45° law. Zicheng Zhang + 20 more. Displays. Oct 1, 2025. DOI: 10.1016/j.displa.2025.103255.
  • Research Article. Review of deep learning-based segmentation methods: Popular approaches, literature gaps, and opportunities. Muhammed Celik + 1 more. Displays. Oct 1, 2025. DOI: 10.1016/j.displa.2025.103225.
  • Research Article. The effect of surrounding avatars' speed and body composition on users' physical activity and exertion perception in VR GYM. Bingcheng Ke + 2 more. Displays. Oct 1, 2025. DOI: 10.1016/j.displa.2025.103253.
  • Research Article. PConv-UNet: Multi-scale pinwheel convolutions for breast ultrasound tumor segmentation. Chen Wang + 6 more. Displays. Oct 1, 2025. DOI: 10.1016/j.displa.2025.103252.
  • Research Article. Lightweight deep learning with Multi-Scale feature fusion for High-Precision and Low-Latency eye tracking. Liwan Lin + 4 more. Displays. Oct 1, 2025. DOI: 10.1016/j.displa.2025.103260.
