Efficient high-resolution microscopic ghost imaging via sequenced speckle illumination and deep learning from a single noisy image

Abstract

This study presents a novel approach for achieving high-quality and large-scale microscopic ghost imaging by integrating deep learning-based denoising with computational ghost imaging techniques. By utilizing sequenced random speckle patterns of optimized sizes, we reconstructed large noisy images with fewer patterns while successfully resolving fine details as small as 2.2 μm on a USAF resolution target. To enhance image quality, we incorporated the Deep Neural Network-based Noise2Void (N2V) model, which effectively denoises ghost images without requiring a reference image or a large dataset. By applying the N2V model to a single noisy ghost image, we achieved significant noise reduction, leading to high-resolution and high-quality reconstructions with low computational resources. This method resulted in an average Structural Similarity Index (SSIM) improvement of over 324% and a resolution enhancement exceeding 33% across various target images. The proposed approach proves highly effective in enhancing the clarity and structural integrity of even very low-quality ghost images, paving the way for more efficient and practical implementations of ghost imaging in microscopic applications.
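The correlation reconstruction at the heart of computational ghost imaging can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation; the function and variable names are invented, and real setups use measured speckle patterns rather than the uniform random patterns simulated here.

```python
import numpy as np

def cgi_reconstruct(patterns, bucket):
    """Differential correlation reconstruction for computational ghost imaging:
    correlate the fluctuating part of the bucket signal with each speckle pattern.

    patterns: (M, H, W) illumination patterns; bucket: (M,) bucket-detector signals.
    """
    fluct = bucket - bucket.mean()                 # remove the DC background
    return np.tensordot(fluct, patterns, axes=(0, 0)) / len(bucket)

# Toy demo: recover a binary object from random speckle illumination.
rng = np.random.default_rng(0)
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0                              # transmissive region
patterns = rng.random((20000, 16, 16))             # random speckle patterns
bucket = patterns.reshape(len(patterns), -1) @ obj.ravel()  # total transmitted light
img = cgi_reconstruct(patterns, bucket)            # noisy estimate of obj
```

With many patterns the reconstruction correlates strongly with the object; with fewer patterns the result is the kind of noisy ghost image the paper then denoises with Noise2Void.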

Similar Papers
  • Research Article
  • Cited by 38
  • 10.1088/1361-6560/ac30a0
Noise2Void: unsupervised denoising of PET images
  • Nov 1, 2021
  • Physics in Medicine & Biology
  • Tzu-An Song + 2 more

Objective: Elevated noise levels in positron emission tomography (PET) images lower image quality and quantitative accuracy and are a confounding factor for clinical interpretation. The objective of this paper is to develop a PET image denoising technique based on unsupervised deep learning. Significance: Recent advances in deep learning have ushered in a wide array of novel denoising techniques, several of which have been successfully adapted for PET image reconstruction and post-processing. The bulk of the deep learning research so far has focused on supervised learning schemes, which, for the image denoising problem, require paired noisy and noiseless/low-noise images. This requirement tends to limit the utility of these methods for medical applications as paired training datasets are not always available. Furthermore, to achieve the best-case performance of these methods, it is essential that the datasets for training and subsequent real-world application have consistent image characteristics (e.g. noise, resolution, etc), which is rarely the case for clinical data. To circumvent these challenges, it is critical to develop unsupervised techniques that obviate the need for paired training data. Approach: In this paper, we have adapted Noise2Void, a technique that relies on corrupt images alone for model training, for PET image denoising and assessed its performance using PET neuroimaging data. Noise2Void is an unsupervised approach that uses a blind-spot network design. It requires only a single noisy image as its input, and, therefore, is well-suited for clinical settings. During the training phase, a single noisy PET image serves as both the input and the target. Here we present a modified version of Noise2Void based on a transfer learning paradigm that involves group-level pretraining followed by individual fine-tuning. Furthermore, we investigate the impact of incorporating an anatomical image as a second input to the network. 
Main Results: We validated our denoising technique using simulation data based on the BrainWeb digital phantom. We show that Noise2Void with pretraining and/or anatomical guidance leads to higher peak signal-to-noise ratios than traditional denoising schemes such as Gaussian filtering, anatomically guided non-local means filtering, and block-matching and 4D filtering. We used the Noise2Noise denoising technique as an additional benchmark. For clinical validation, we applied this method to human brain imaging datasets. The clinical findings were consistent with the simulation results confirming the translational value of Noise2Void as a denoising tool.
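The blind-spot masking that lets Noise2Void train on a single noisy image can be sketched as follows. Only the masking step is shown (the network itself is omitted), and all names are illustrative rather than taken from the paper's code.

```python
import numpy as np

def blind_spot_mask(noisy, n_masked, rng):
    """Noise2Void-style masking: replace a few random pixels with the value of a
    random neighbor. A network is then trained so its output at those positions
    matches the *original* noisy values there; since the masked input carries no
    information about those exact pixels, the network cannot learn the identity
    map and instead learns to predict pixels from context, i.e. to denoise.
    """
    h, w = noisy.shape
    ys = rng.integers(1, h - 1, size=n_masked)
    xs = rng.integers(1, w - 1, size=n_masked)
    dy = rng.integers(-1, 2, size=n_masked)        # neighbor offsets in {-1, 0, 1}
    dx = rng.integers(-1, 2, size=n_masked)
    targets = noisy[ys, xs].copy()                 # training targets
    masked = noisy.copy()
    masked[ys, xs] = noisy[ys + dy, xs + dx]       # neighbor substitution
    return masked, (ys, xs), targets

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))                    # stand-in noisy image
masked, (ys, xs), targets = blind_spot_mask(img, 64, rng)
```

During training, the loss is evaluated only at the masked coordinates, which is what makes a single noisy image usable as both input and target.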

  • Research Article
  • Cited by 3
  • 10.1371/journal.pcbi.1012192
Zero-shot denoising of microscopy images recorded at high-resolution limits.
  • Jun 10, 2024
  • PLoS computational biology
  • Sebastian Salwig + 2 more

Conventional and electron microscopy visualize structures in the micrometer to nanometer range, and such visualizations contribute decisively to our understanding of biological processes. Due to different factors in recording processes, microscopy images are subject to noise. Especially at their respective resolution limits, a high degree of noise can negatively affect both image interpretation by experts and further automated processing. However, the deteriorating effects of strong noise can be alleviated to a large extent by image enhancement algorithms. Because of the inherent high noise, a requirement for such algorithms is their applicability directly to noisy images or, in the extreme case, to just a single noisy image without a priori noise level information (referred to as blind zero-shot setting). This work investigates blind zero-shot algorithms for microscopy image denoising. The strategies for denoising applied by the investigated approaches include: filtering methods, recent feed-forward neural networks which were amended to be trainable on noisy images, and recent probabilistic generative models. As datasets we consider transmission electron microscopy images including images of SARS-CoV-2 viruses and fluorescence microscopy images. A natural goal of denoising algorithms is to simultaneously reduce noise while preserving the original image features, e.g., the sharpness of structures. However, in practice, a tradeoff between both aspects often has to be found. Our performance evaluations, therefore, focus not only on noise removal but set noise removal in relation to a metric which is instructive about sharpness. For all considered approaches, we numerically investigate their performance, report their denoising/sharpness tradeoff on different images, and discuss future developments. We observe that, depending on the data, the different algorithms can provide significant advantages or disadvantages in terms of their noise removal vs. sharpness preservation capabilities, which may be very relevant for different virological applications, e.g., virological analysis or image segmentation.
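The denoising-vs-sharpness tradeoff described above can be made concrete with two simple metrics: PSNR for noise removal and mean gradient magnitude as a sharpness proxy. The metric choice and all names below are illustrative assumptions, not the paper's evaluation protocol.

```python
import numpy as np

def psnr(clean, est, peak=1.0):
    """Peak signal-to-noise ratio in dB (higher = closer to the clean image)."""
    return 10 * np.log10(peak ** 2 / np.mean((clean - est) ** 2))

def sharpness(img):
    """Mean gradient magnitude: a crude stand-in for a sharpness metric."""
    gy, gx = np.gradient(img)
    return np.mean(np.hypot(gy, gx))

def smooth(img, passes):
    """Repeated 4-neighbor averaging, a naive low-pass denoiser."""
    out = img.copy()
    for _ in range(passes):
        out = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                      + np.roll(out, 1, 1) + np.roll(out, -1, 1))
    return out

# Toy illustration: smoothing removes noise but also blunts a sharp edge.
rng = np.random.default_rng(2)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0                                # one sharp vertical edge
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = smooth(noisy, 10)                       # less noise, lower sharpness
```

Plotting PSNR against the sharpness score for several denoisers traces out exactly the tradeoff curve the paper evaluates.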

  • Research Article
  • Cited by 31
  • 10.1109/tgrs.2022.3217289
NS2NS: Self-Learning for Seismic Image Denoising
  • Jan 1, 2022
  • IEEE Transactions on Geoscience and Remote Sensing
  • Naihao Liu + 6 more

Attenuation of incoherent noise is an effective way to improve the signal-to-noise ratio (SNR) of seismic data. Recently, supervised deep learning based methods have been widely utilized for seismic image denoising, which often need plenty of noise-free data as training labels. However, noise-free seismic data are often unavailable in field applications. We propose an unsupervised learning method (NS2NS) to train a denoising network using a single noisy seismic image. The proposed model is based on two basic properties of seismic data: (1) the high self-similarity of seismic data and (2) the spatial independence of incoherent noise in seismic data. To implement the proposed method, we first build a sampling workflow to generate paired noisy images from a single noisy seismic image. Using the proposed self-similar sampler, we create noisy images that are similar to but distinct from the original noisy image. The original noisy image and the generated similar noisy images are then fused using a Bernoulli sampler to create new paired noisy images. These new paired noisy images are used as the input and target of the denoising model, respectively. Next, an end-to-end convolutional neural network (CNN) is built for seismic image denoising, which aims to learn features of valid signals and suppress unpredictable random noise. Finally, we apply the proposed NS2NS method to both synthetic and field data. The results show that our proposed method can effectively suppress incoherent noise while preserving valid signals.
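The Bernoulli-sampler fusion step described above might look like the following sketch, which mixes two similar noisy images pixel-by-pixel with a random mask to produce a new noisy input/target pair. This is an interpretation of the abstract, not the authors' code, and all names are invented.

```python
import numpy as np

def bernoulli_fuse(img_a, img_b, p, rng):
    """Fuse two similar noisy images with a Bernoulli mask: each pixel of the
    first output is drawn from img_a with probability p, otherwise from img_b.
    The complementary fusion serves as the training target, so paired pixels
    come from different noise realizations of similar content.
    """
    mask = rng.random(img_a.shape) < p
    fused_in = np.where(mask, img_a, img_b)
    fused_tgt = np.where(mask, img_b, img_a)
    return fused_in, fused_tgt

rng = np.random.default_rng(3)
a = rng.normal(size=(32, 32))                     # stand-in noisy image
b = a + 0.05 * rng.normal(size=a.shape)           # a similar noisy image
x, y = bernoulli_fuse(a, b, 0.5, rng)             # new paired noisy images
```

Because the incoherent noise is spatially independent, the input and target disagree only in their noise, which is what makes the pair usable for Noise2Noise-style training.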

  • Conference Article
  • Cited by 269
  • 10.1109/cvpr46437.2021.01454
Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images
  • Jun 1, 2021
  • Tao Huang + 4 more

In the last few years, image denoising has benefited a lot from the fast development of neural networks. However, the requirement of large amounts of noisy-clean image pairs for supervision limits the wide use of these models. Although there have been a few attempts at training an image denoising model with only single noisy images, existing self-supervised denoising approaches suffer from inefficient network training, loss of useful information, or dependence on noise modeling. In this paper, we present a very simple yet effective method named Neighbor2Neighbor to train an effective image denoising model with only noisy images. Firstly, a random neighbor sub-sampler is proposed for the generation of training image pairs. In detail, the input and target used to train a network are images sub-sampled from the same noisy image, satisfying the requirement that paired pixels of paired images are neighbors and have very similar appearance to each other. Secondly, a denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance. The proposed Neighbor2Neighbor framework is able to enjoy the progress of state-of-the-art supervised denoising networks in network architecture design. Moreover, it avoids heavy dependence on the assumption of the noise distribution. We explain our approach from a theoretical perspective and further validate it through extensive experiments, including synthetic experiments with different noise distributions in sRGB space and real-world experiments on a denoising benchmark dataset in raw-RGB space.
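The random neighbor sub-sampler can be sketched as follows: split the image into 2×2 cells and pick two different pixels from each cell, giving two half-resolution images whose paired pixels are neighbors. This is a simplified reading of the abstract, not the authors' implementation.

```python
import numpy as np

def neighbor_subsample(noisy, rng):
    """Neighbor2Neighbor-style sub-sampler (sketch): from each 2x2 cell, pick
    two *different* pixels at random. The two resulting half-size images form a
    training pair whose paired pixels are spatial neighbors, so their clean
    content is similar while their noise is independent.
    """
    h, w = noisy.shape
    cells = noisy[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    cells = cells.transpose(0, 2, 1, 3).reshape(h // 2, w // 2, 4)
    idx1 = rng.integers(0, 4, size=cells.shape[:2])
    idx2 = (idx1 + rng.integers(1, 4, size=cells.shape[:2])) % 4  # always different
    g1 = np.take_along_axis(cells, idx1[..., None], axis=2)[..., 0]
    g2 = np.take_along_axis(cells, idx2[..., None], axis=2)[..., 0]
    return g1, g2

rng = np.random.default_rng(4)
img = rng.normal(size=(64, 64))                   # stand-in noisy image
g1, g2 = neighbor_subsample(img, rng)             # input/target training pair
```

A denoising network is then trained with g1 as input and g2 as target; the paper's additional regularizer, omitted here, compensates for the small content gap between neighboring pixels.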

  • Conference Article
  • Cited by 37
  • 10.1109/cvpr46437.2021.00571
FBI-Denoiser: Fast Blind Image Denoiser for Poisson-Gaussian Noise
  • Jun 1, 2021
  • Jaeseok Byun + 2 more

We consider the challenging blind denoising problem for Poisson-Gaussian noise, in which no additional information about clean images or noise level parameters is available. Particularly, when only "single" noisy images are available for training a denoiser, the denoising performance of existing methods was not satisfactory. Recently, the blind pixelwise affine image denoiser (BP-AIDE) was proposed and significantly improved the performance in the above setting, to the extent that it is competitive with denoisers which utilized additional information. However, BP-AIDE seriously suffered from slow inference time due to the inefficiency of the noise level estimation procedure and of the blind-spot network (BSN) architecture it used. To that end, we propose Fast Blind Image Denoiser (FBI-Denoiser) for Poisson-Gaussian noise, which consists of two neural network models: 1) PGE-Net, which estimates Poisson-Gaussian noise parameters 2000 times faster than the conventional methods, and 2) FBI-Net, which realizes a much more efficient BSN for a pixelwise affine denoiser in terms of the number of parameters and inference speed. Consequently, we show that our FBI-Denoiser, blindly trained solely on single noisy images, can achieve state-of-the-art performance on several real-world noisy image benchmark datasets with much faster inference time (×10) compared to BP-AIDE.
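The Poisson-Gaussian observation model that PGE-Net estimates parameters for is standard and easy to simulate; the sketch below generates such noise and checks the signal-dependent variance law. The parameter names are conventional, not taken from the paper.

```python
import numpy as np

def poisson_gaussian(clean, alpha, sigma, rng):
    """Poisson-Gaussian observation model commonly assumed in blind denoising:
        z = alpha * Poisson(clean / alpha) + N(0, sigma^2),
    so the noise variance is signal-dependent: Var[z | x] = alpha * x + sigma^2.
    Blind estimators like PGE-Net recover (alpha, sigma) from a noisy image alone.
    """
    shot = alpha * rng.poisson(clean / alpha)          # signal-dependent part
    read = rng.normal(0.0, sigma, clean.shape)         # signal-independent part
    return shot + read

rng = np.random.default_rng(5)
x = np.full((200, 200), 0.5)                           # flat patch, intensity 0.5
z = poisson_gaussian(x, alpha=0.01, sigma=0.02, rng=rng)
pred_var = 0.01 * 0.5 + 0.02 ** 2                      # alpha * x + sigma^2
```

On a flat patch the empirical variance of z should match alpha·x + sigma², which is the moment relation parameter-estimation methods exploit.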

  • Conference Article
  • Cited by 161
  • 10.1109/icip.2012.6466947
Noise level estimation using weak textured patches of a single noisy image
  • Sep 1, 2012
  • Xinhao Liu + 2 more

A patch-based noise level estimation algorithm is proposed in this paper, with patches generated from a single noisy image. One can easily estimate the noise level from image patches using principal component analysis (PCA) if the image comprises only weak textured patches. The challenge for patch-based noise level estimation is how to select weak textured patches from a noisy image. As described in this paper, we propose a novel algorithm to select weak textured patches from a single noisy image based on the gradients of the patches and their statistics. Then we estimate the noise level from the selected weak textured patches using PCA. We demonstrate experimentally that the proposed noise level estimation algorithm outperforms the state-of-the-art algorithm.
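The core idea — select weak-texture patches by gradient statistics, then read the noise variance off the smallest PCA eigenvalue — can be sketched as follows. This simplified version selects patches in one pass rather than iteratively as the paper does, and all thresholds are illustrative.

```python
import numpy as np

def estimate_noise_pca(noisy, patch=4, keep=0.3):
    """Sketch of PCA-based noise-level estimation: collect overlapping patches,
    keep the fraction with the weakest gradient energy (weak-texture patches),
    and estimate the noise standard deviation from the smallest eigenvalue of
    their sample covariance matrix, where clean structure contributes least.
    """
    h, w = noisy.shape
    gy, gx = np.gradient(noisy)
    grad = np.hypot(gy, gx)
    patches, texture = [], []
    for y in range(0, h - patch, patch // 2):
        for x in range(0, w - patch, patch // 2):
            patches.append(noisy[y:y + patch, x:x + patch].ravel())
            texture.append(grad[y:y + patch, x:x + patch].sum())
    patches = np.array(patches)
    weak = patches[np.argsort(texture)[:max(1, int(keep * len(patches)))]]
    cov = np.cov(weak, rowvar=False)
    return np.sqrt(max(np.linalg.eigvalsh(cov).min(), 0.0))

rng = np.random.default_rng(6)
clean = np.outer(np.linspace(0.0, 1.0, 128), np.ones(128))  # smooth ramp
noisy = clean + 0.05 * rng.normal(size=clean.shape)
sigma_hat = estimate_noise_pca(noisy)   # rough estimate of the true sigma (0.05)
```

On smooth content the clean signal occupies a low-dimensional subspace of patch space, so the remaining eigenvalues approach the noise variance; the one-pass selection here tends to underestimate slightly.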

  • Conference Article
  • Cited by 5
  • 10.1109/nss/mic42677.2020.9507875
Noise2Void Denoising of PET Images
  • Oct 31, 2020
  • Tzu-An Song + 1 more

Qualitative and quantitative interpretation of PET images is often a challenging task due to high levels of noise in the images. While deep learning architectures based on convolutional neural networks have produced unprecedented accuracy at denoising PET images, most existing approaches require large training datasets with corrupt and clean image pairs, which are often unavailable for many clinical applications. The Noise2Noise technique obviates the need for clean target images but instead introduces the requirement for two noise realizations for each corrupt input. In this paper, we present a denoising technique for PET based on the Noise2Void paradigm, which requires only a single noisy image for training thus ensuring wider applicability and adoptability. During the training phase, a single noisy PET image serves as both the input and the target. The method was validated on simulation data based on the BrainWeb digital phantom. Our results show that it generates comparable performance at the training and validation stages for varying noise levels. Furthermore, its performance remains robust even when the validation inputs have different count levels than the training inputs.

  • Conference Article
  • Cited by 4
  • 10.1109/iccv48922.2021.00284
Self-Supervised Image Prior Learning with GMM from a Single Noisy Image
  • Oct 1, 2021
  • Haosen Liu + 3 more

The lack of clean images undermines the practicability of supervised image prior learning methods, of which the training schemes require a large number of clean images. To free image prior learning from the image collection burden, a novel Self-Supervised learning method for Gaussian Mixture Model (SS-GMM) is proposed in this paper. It can simultaneously achieve the noise level estimation and the image prior learning directly from only a single noisy image. This work is derived from our study on eigenvalues of the GMM’s covariance matrix. Through statistical experiments and theoretical analysis, we conclude that (1) covariance eigenvalues for clean images hold the sparsity; and that (2) those for noisy images contain sufficient information for noise estimation. The first conclusion inspires us to impose a sparsity constraint on covariance eigenvalues during the learning process to suppress the influence of noise. The second conclusion leads to a self-contained noise estimation module of high accuracy in our proposed method. This module serves to estimate the noise level and automatically determine the specific level of the sparsity constraint. Our final derived method requires only minor modifications to the standard expectation-maximization algorithm. This makes it easy to implement. Very interestingly, the GMM learned via our proposed self-supervised learning method can even achieve better image denoising performance than its supervised counterpart, i.e., the EPLL. Also, it is on par with the state-of-the-art self-supervised deep learning method, i.e., the Self2Self. Code is available at https://github.com/HUST-Tan/SS-GMM.

  • Conference Article
  • Cited by 13
  • 10.1109/icip.2014.7025542
Signal dependent noise removal from a single image
  • Oct 1, 2014
  • Xinhao Liu + 2 more

State-of-the-art image denoising algorithms usually assume additive white Gaussian noise (AWGN). Although these algorithms have achieved outstanding performance, modeling and removing real signal-dependent noise from a single image still remains a challenging problem. In this paper we propose a segmentation-based image denoising algorithm for signal-dependent noise. Incorporating a noise identification algorithm, we integrate these two modules into a fully blind, end-to-end denoising algorithm for signal-dependent noise. First, we identify the noise level function for a given single noisy image. Then, after initial denoising, segmentation is applied to the pre-filtered image. Assuming the noise level of each segment is constant, we apply an AWGN denoising algorithm to each segment. We obtain a final denoised image by composing the denoised segments. Various experimental results on synthetic and real noisy images show that our algorithm outperforms state-of-the-art denoising algorithms in removing real signal-dependent noise.

  • Book Chapter
  • Cited by 1
  • 10.1007/978-3-030-01177-2_95
Single Image Based Random-Value Impulse Noise Level Estimation Algorithm
  • Nov 2, 2018
  • Long Bao + 2 more

Image denoising is a vital and indispensable pre-process for most applied image processing systems. Having prior knowledge about the noise level is essential for optimizing denoising algorithms. However, this information most likely does not exist for real applications and is much harder to extract from a single noisy image than from multiple noisy images. For Gaussian noise, there are many accurate state-of-the-art level estimators, whereas only a limited number of random-valued impulse noise level estimators have been proposed. Moreover, the existing impulse noise estimators are limited in accuracy, especially in the presence of high noise levels. This paper presents a new random-valued impulse noise level estimation (RVI-E) algorithm using only a single image. The presented RVI-E algorithm is based on the distribution property of impulse noise pixels, on correlation within the image, and on a new linear relationship between the percentage of strongly distorted noise pixels and that of all noise pixels. The mathematical study, computer simulations, and analysis on 347 different images using five online grayscale image databases show that (a) the presented method is efficient, robust and reliable, (b) the presented method shows stably accurate performance across images with different contents and different levels of noise (lower than 60%), and (c) the speed performance of the proposed RVI-E can be boosted by a parallel computing strategy, since the estimation utilizes a parallel framework.

  • Research Article
  • Cited by 1
  • 10.14257/ijsip.2014.7.5.24
Image Restoration Based on L1 + L1 Model
  • Oct 31, 2014
  • International Journal of Signal Processing, Image Processing and Pattern Recognition
  • Ruihua Liu

In this paper, we firstly propose a new image restoration model including a non-smooth L1-norm regularization term based on bilateral total variation regularization. Secondly, we prove the existence of minimal solutions of the proposed energy functional model. Thirdly, we consider the convergence of the discrete numerical algorithm and show that the limit point of the solution sequence is the minimal point of the proposed energy functional. Finally, we give experimental simulation results for a single noisy image without blurring, multiple different noisy images without blurring, a single noisy image with blurring, and multiple different noisy images with different blurring, respectively. The restoration results show our model works effectively.
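The abstract does not spell out the energy functional, but given the title ("L1 + L1"), a representative form combines an L1 data-fidelity term with a bilateral total-variation (also L1) regularizer. The notation below is illustrative, not taken from the paper: K is a blur operator (the identity in the no-blur cases), S_x and S_y are horizontal and vertical shift operators, and α, λ, p are weights and a window radius.

```latex
\min_{u}\; \lVert K u - f \rVert_{1}
\;+\; \lambda \sum_{i=-p}^{p} \sum_{j=-p}^{p}
\alpha^{|i|+|j|} \,\bigl\lVert u - S_x^{i} S_y^{j} u \bigr\rVert_{1}
```

Both terms are non-smooth L1 norms, which is what makes the existence and convergence analysis in the paper nontrivial.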

  • Research Article
  • Cited by 6
  • 10.1016/j.compbiomed.2023.107308
M-Denoiser: Unsupervised image denoising for real-world optical and electron microscopy data
  • Jul 29, 2023
  • Computers in Biology and Medicine
  • Xiaoya Chong + 4 more


  • Research Article
  • Cited by 1
  • 10.1111/cgf.14680
Learning Multi‐Scale Deep Image Prior for High‐Quality Unsupervised Image Denoising
  • Oct 1, 2022
  • Computer Graphics Forum
  • Hao Jiang + 4 more

Recent methods on image denoising have achieved remarkable progress, benefiting mostly from supervised learning on massive noisy/clean image pairs and unsupervised learning on external noisy images. However, due to the domain gap between the training and testing images, these methods typically have limited applicability on unseen images. Although several attempts have been made to avoid the domain gap issue by learning denoising from the single noisy image itself, they are less effective in handling real-world noise because they assume the noise corruptions are independent and zero-mean. In this paper, we go a step further beyond prior work by presenting a novel unsupervised image denoising framework trained from a single noisy image without making any explicit assumptions about the noise statistics. Our approach is built upon the deep image prior (DIP), which enables diverse image restoration tasks. However, as is, the denoising performance of DIP significantly deteriorates on nonzero-mean noise and is sensitive to the number of iterations. To overcome this problem, we propose to utilize a multi-scale deep image prior by imposing DIP across different image scales under a scale-consistency constraint. Experiments on synthetic and real datasets demonstrate that our method performs favorably against the state-of-the-art methods for image denoising.

  • Research Article
  • 10.1609/aaai.v39i5.32575
Zero-Shot Noise2Mean: Gap Minimization for Efficient Denoising from a Single Noisy Image
  • Apr 11, 2025
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Duo Liu + 4 more

Acquiring pairwise noisy-clean training data is challenging. Consequently, some self-supervised denoising methods utilize noisy image pairs as both input and target for network training. However, a major issue with these methods is the gap between the clean images of the input and target. In this paper, we achieve high-quality image denoising by reducing or even eliminating this gap. Our method requires no training data or prior knowledge of the noise distribution. It consists of two lightweight networks that can be trained using only a single noisy test image. Specifically, we propose a random mask-based downsampler that generates multiple pairs of downsampled noisy images, which are similar but distinct. These image pairs serve as the input for the first network, with the mean image of each pair used as the target. This initially reduces the gap between the clean images of the input and target. Particularly, in our method, the clean counterpart of the first network's target (i.e., the mean image) can be obtained. We then train a second network using the mean image as input and its clean counterpart as the target. This effectively eliminates the gap and achieves better denoising results. Extensive experiments demonstrate that our method outperforms competing methods in both denoising performance and efficiency.
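The first stage — a random mask-based downsampler whose pair mean serves as the training target — might look like the sketch below. This is one plausible reading of the abstract, not the authors' implementation, and every name is invented.

```python
import numpy as np

def random_mask_pair(noisy, rng):
    """Random mask-based downsampler (sketch): from each 2x2 cell, pick two
    *different* pixels to form two similar-but-distinct half-size noisy images,
    then use their average as the training target. Averaging two independent
    noise realizations halves the noise variance, shrinking the gap between the
    clean content of input and target.
    """
    h, w = noisy.shape
    cells = noisy[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    cells = cells.transpose(0, 2, 1, 3).reshape(h // 2, w // 2, 4)
    i1 = rng.integers(0, 4, size=cells.shape[:2])
    i2 = (i1 + rng.integers(1, 4, size=cells.shape[:2])) % 4  # always different
    d1 = np.take_along_axis(cells, i1[..., None], axis=2)[..., 0]
    d2 = np.take_along_axis(cells, i2[..., None], axis=2)[..., 0]
    return d1, (d1 + d2) / 2          # network input, mean-image target

rng = np.random.default_rng(8)
sigma = 0.1
noisy = 0.5 + sigma * rng.normal(size=(128, 128))  # flat scene + noise
x, target = random_mask_pair(noisy, rng)
```

On a flat scene the target's noise variance is close to sigma²/2, half that of the input, which illustrates the gap reduction the paper builds on before its second, gap-eliminating stage.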

  • Research Article
  • Cited by 1
  • 10.1364/josab.546265
Global ghost imaging
  • Jun 10, 2025
  • Journal of the Optical Society of America B
  • Nixi Zhao + 6 more

Ghost imaging, as an imaging technique, holds great potential for standard imaging. However, the inability to achieve a large field of view, high resolution, and high-quality image reconstruction in a short time with a small number of measurements seriously hinders the practical application of ghost imaging. Parallel ghost imaging treats each pixel of the position-sensitive detector as a bucket detector and simultaneously executes tens of thousands of ghost imaging processes in parallel. This enables nonlocal imaging with high resolution, an extra-large field of view, and low dosage. In this work, we propose a dedicated imaging method for parallel ghost imaging within the framework of the bucket detector array, namely global ghost imaging. Global ghost imaging introduces global prior knowledge, enabling parallel ghost imaging not to be calculated independently within each local system but to have the global prior cover all subsystems. The speckle patterns of each ghost imaging subsystem are uploaded to the terminal for unified iterative computation. This transforms the iterative sparse solutions of each subsystem from local optima to a global optimum. Simulations and experiments demonstrate that global ghost imaging achieves a large field of view and high-resolution imaging, completely eliminates the discontinuities between subsystems, significantly improves image quality, exhibits strong noise robustness, and, more crucially, enables image reconstruction with an extremely low number of samples. By using the classical ghost imaging framework and the computational ghost imaging framework, respectively, we showcase the ability of this method to reconstruct a complex sample with an image size of 800×280 pixels using only eight measurements.
