Adaptive reference constrained regularization by denoising for image restoration
High-quality reference images serve as a strong prior for many image reconstruction techniques, and Reference Image Constrained Regularization by Denoising has shown that they improve reconstruction performance. Low-quality reference images, however, lead to disappointing results, even underperforming methods that use no reference at all. To address this limitation, a novel framework termed Adaptive Reference Constrained Regularization by Denoising is proposed in this paper. The new method mitigates the influence of low-quality reference images by adaptively leveraging the reconstruction results of the existing algorithm to form a more reliable constraint. An enhanced version is also introduced, equipped with a self-tuning parameter that progressively diminishes the weight of a poor reference over the optimization iterations. Extensive experiments on single-image super-resolution and MRI reconstruction demonstrate that the proposed approach significantly outperforms state-of-the-art methods, including Regularization by Denoising and Reference Image Constrained Regularization by Denoising, in both quantitative metrics and visual fidelity.
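The reference-constrained RED idea described above can be sketched as gradient descent on a data term, a denoiser-induced prior, and a reference term whose weight decays over iterations. This is a minimal illustrative sketch, not the paper's implementation: the 1-D signal, the box-filter denoiser, and all parameter values (`lam`, `mu`, `step`, the 0.9 decay) are assumptions.

```python
import numpy as np

def box_denoise(x):
    # 3-tap moving-average denoiser, a toy stand-in for a learned denoiser f(x)
    p = np.pad(x, 1, mode="edge")
    return (p[:-2] + p[1:-1] + p[2:]) / 3.0

def ared_step(x, y, ref, mu, lam=0.5, step=0.5):
    # gradient of 0.5||x-y||^2 + (lam/2) x^T(x - f(x)) + (mu/2)||x - ref||^2,
    # with the RED gradient approximated as lam*(x - f(x))
    grad = (x - y) + lam * (x - box_denoise(x)) + mu * (x - ref)
    return x - step * grad

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))
y = clean + 0.3 * rng.standard_normal(64)  # noisy observation
ref = np.zeros(64)                         # deliberately poor reference
x = y.copy()
for k in range(50):
    x = ared_step(x, y, ref, mu=0.05 * 0.9**k)  # self-tuning reference weight
mse_in, mse_out = np.mean((y - clean) ** 2), np.mean((x - clean) ** 2)
```

Because the reference weight `mu` decays geometrically, the poor reference biases the early iterates only slightly, and the denoiser prior still reduces the error relative to the noisy input.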
- Conference Article
- 10.1117/12.2505809
- Feb 8, 2019
Single-image super-resolution (SISR) reconstruction is important for image processing, and many algorithms based on deep convolutional neural networks (CNNs) have been proposed in recent years. Although these algorithms achieve better accuracy and recovery results than traditional methods without CNNs, they neglect finer texture details when super-resolving at a large upscaling factor. To solve this problem, in this paper we propose an algorithm based on a generative adversarial network for single-image super-resolution restoration at a 4x upscaling factor. The generative network decodes a restored high-resolution image, and the adversarial network pushes the generator toward finer, more realistic texture details. We performed experiments on the DIV2K dataset and showed that our method performs better in single-image super-resolution reconstruction. The reconstructed images improve on both the peak signal-to-noise ratio and the structural similarity index, and the results have a good visual effect.
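The generator objective in adversarial SR methods typically combines a pixel-wise content loss with an adversarial term. The sketch below is a generic SRGAN-style loss, not this paper's exact formulation; the weight `w_adv` and the `-log D(G(lr))` form are illustrative assumptions.

```python
import numpy as np

def generator_loss(sr, hr, d_sr, w_adv=1e-3):
    """Generic adversarial-SR generator objective (a sketch, not the paper's
    exact loss): MSE content term plus an adversarial term -log D(G(lr))
    that rewards fooling the discriminator."""
    content = np.mean((sr - hr) ** 2)
    adversarial = -np.mean(np.log(d_sr + 1e-12))
    return content + w_adv * adversarial

# a near-perfect reconstruction that fools the discriminator costs ~0,
# while a poor reconstruction judged fake costs much more
hr = np.ones((4, 4))
loss_good = generator_loss(hr, hr, np.full(4, 0.99))
loss_bad = generator_loss(hr + 0.5, hr, np.full(4, 0.01))
```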
- Conference Article
4
- 10.1109/ipta.2016.7820962
- Jan 1, 2016
Single image super-resolution (SR) reconstruction aims to estimate a noise-free and blur-free high resolution image from a single blurred and noisy lower resolution observation. Most existing SR reconstruction methods assume that noise in the image is white Gaussian. Noise resulting from photon counting devices, as commonly used in image acquisition, is, however, better modelled with a mixed Poisson-Gaussian distribution. In this study we propose a single image SR reconstruction method based on energy minimization for images degraded by mixed Poisson-Gaussian noise. We evaluate performance of the proposed method on synthetic images, for different levels of blur and noise, and compare it with recent methods for non-Gaussian noise. Analysis shows that the appropriate treatment of signal-dependent noise, provided by our proposed method, leads to significant improvement in reconstruction performance.
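One common tool for the mixed Poisson-Gaussian noise model discussed above is the generalized Anscombe transform, which stabilizes the signal-dependent variance; note this is a companion technique for illustration, not necessarily part of this paper's energy-minimization formulation.

```python
import numpy as np

def gat(z, alpha, sigma):
    """Generalized Anscombe transform: maps mixed Poisson-Gaussian data
    z = alpha*Poisson(u) + N(0, sigma^2) to approximately unit variance.
    (Illustrative tool; the paper itself minimizes an energy with a
    Poisson-Gaussian data term.)"""
    return (2.0 / alpha) * np.sqrt(np.maximum(alpha * z + 0.375 * alpha**2 + sigma**2, 0.0))

rng = np.random.default_rng(1)
alpha, sigma, n = 1.0, 0.5, 200_000
dark = alpha * rng.poisson(5.0, n) + sigma * rng.standard_normal(n)
bright = alpha * rng.poisson(50.0, n) + sigma * rng.standard_normal(n)
# raw variances grow with the signal; after the transform both are near 1
v_dark, v_bright = gat(dark, alpha, sigma).var(), gat(bright, alpha, sigma).var()
```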
- Conference Article
- 10.1145/2632856.2632857
- Jul 10, 2014
In this paper, we propose a novel single-image super-resolution (SR) reconstruction framework based on an artificial neural network (ANN) and Gaussian process regression (GPR). The ANN is used for SR reconstruction, and the GPR is used for correction. The new framework combines multiple reconstruction approaches, including deep learning and sparse representation from a local dictionary. The main contribution is enhanced reconstruction performance, obtained by utilizing compressed image features, relative to other state-of-the-art single-image SR approaches in terms of both visual perception and quantitative assessment.
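The predict-then-correct pipeline can be illustrated with a minimal GP regressor that learns the residual left by a first-stage predictor and adds it back at test time. The RBF kernel, its hyperparameters, and the toy inputs are all assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def gpr_correct(X_train, resid_train, X_test, length=1.0, noise=1e-2):
    """Toy GP regression used as a corrector: fit the residual of a
    first-stage SR predictor, then predict it at new inputs so it can be
    added back. RBF kernel and hyperparameters are illustrative."""
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length**2))
    alpha = np.linalg.solve(
        K(X_train, X_train) + noise * np.eye(len(X_train)), resid_train
    )
    return K(X_test, X_train) @ alpha

X = np.array([[0.0], [1.0], [2.0]])
resid = np.array([1.0, -1.0, 0.5])   # first-stage errors at these inputs
pred = gpr_correct(X, resid, X)      # near-interpolation at training inputs
```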
- Conference Article
2
- 10.1109/pic.2017.8359542
- Dec 1, 2017
High-resolution (HR) image reconstruction from a single low-resolution (LR) image is an important vision application. Although numerous algorithms have been successfully proposed in recent years, efficient and robust single-image super-resolution (SR) reconstruction is still challenged by several factors, such as the inherently ambiguous mapping between HR and LR images, the need for huge numbers of exemplar images, and the computational load. In this paper, we propose a new learning-based method for single-image SR. Inspired by the simple mapping functions method, a mapping matrix table of HR-LR feature patches is calculated in the training phase. Each atom of a dictionary learned from LR feature patches corresponds to a mapping matrix in the mapping matrix table. Combining this mapping table with sparse coding, high-quality HR images are reconstructed in the reconstruction phase. The effectiveness and efficiency of this method are validated with experiments on the training datasets. Compared with state-of-the-art methods, jagged and blurred artifacts are suppressed effectively and high reconstruction quality is achieved with fewer exemplar images.
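The per-atom mapping-table lookup can be sketched as follows. This is a nearest-atom simplification of the paper's sparse-coding step, and the dictionary and mapping matrices are random placeholders rather than trained quantities.

```python
import numpy as np

rng = np.random.default_rng(2)
n_atoms, lr_dim, hr_dim = 8, 4, 16

# toy training artifacts: an LR feature dictionary and, per atom, an LR->HR
# mapping matrix (the "mapping matrix table"); values are random placeholders
D = rng.standard_normal((n_atoms, lr_dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)
M = rng.standard_normal((n_atoms, hr_dim, lr_dim))

def reconstruct_patch(lr_feat):
    """Nearest-atom simplification of the sparse-coding lookup: select the
    dictionary atom most correlated with the LR feature, then apply its
    mapping matrix to produce the HR feature patch."""
    k = int(np.argmax(np.abs(D @ lr_feat)))
    return M[k] @ lr_feat

hr_patch = reconstruct_patch(rng.standard_normal(lr_dim))
```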
- Research Article
27
- 10.1016/j.ins.2016.08.049
- Aug 16, 2016
- Information Sciences
Single image super-resolution reconstruction based on genetic algorithm and regularization prior model
- Conference Article
3
- 10.1109/icip.2013.6738131
- Sep 1, 2013
This paper presents a novel method for single-image super-resolution (SR) reconstruction using low-rank matrix recovery and nonlinear mappings. First, low-rank matrix recovery is utilized to learn the underlying structures of the subspaces spanned by the grouped patch features. Second, the low-rank components of the low-resolution (LR) and high-resolution (HR) patch features are mapped onto high-dimensional spaces by separate nonlinear mappings. The mapped high-dimensional vectors are then projected onto a unified space, where the two manifolds constructed by the LR and HR patches have similar local geometry and the SR reconstruction is performed via neighbor embedding. The experimental results validate the effectiveness of our method and suggest that it outperforms other SR algorithms both qualitatively and quantitatively.
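The low-rank recovery step can be sketched with a truncated SVD, which keeps only the leading singular components of the grouped patch features. The paper uses a robust low-rank recovery (RPCA-style) rather than plain truncation, so this is a simplified stand-in.

```python
import numpy as np

def lowrank_component(F, rank=2):
    """Truncated-SVD stand-in for low-rank matrix recovery of grouped patch
    features: keep the top `rank` singular components."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(3)
A, B = rng.standard_normal((10, 2)), rng.standard_normal((2, 8))
F = A @ B                       # grouped patch features with true rank 2
L = lowrank_component(F, rank=2)  # exact recovery when the rank matches
```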
- Conference Article
23
- 10.1109/icip.2017.8296862
- Sep 1, 2017
In recent years, single-image super-resolution (SR) reconstruction has attracted wide attention, and massive SR enhancement algorithms have been proposed. However, much less work has been done on the perceptual evaluation of SR-enhanced images and the corresponding enhancement algorithms. In this work, we create a Super-resolution Reconstructed Image Database (SRID), which consists of images produced by two interpolation methods and six popular SR image enhancement algorithms at different amplification factors. Then, a subjective experiment is conducted to collect subjective scores using the single-stimulus method. The performances of the SR image enhancement algorithms are evaluated using the obtained subjective scores. Finally, the performances of general-purpose no-reference (NR) image quality metrics are investigated on the SRID database. This study shows that it is difficult for state-of-the-art NR image quality metrics to predict the quality of SR-enhanced images.
- Research Article
- 10.1049/ipr2.12878
- Jul 19, 2023
- IET Image Processing
Deep learning can be used to achieve single-image super-resolution (SR) reconstruction. To address problems encountered during this process, such as the number of network parameters, high training requirements on equipment performance, and the inability to downsample certain SR images accurately, an image SR reconstruction algorithm based on deep residual network optimization is proposed. The model introduces wavelet transforms into the original U-Net, where the U-Net is trained to obtain SR wavelet feature images at multiple scales simultaneously. This approach reduces the mapping space the network must learn for low- to high-resolution image mapping, which in turn reduces the training difficulty of the model. In terms of network details, the inverse wavelet transform is used in image upsampling to enhance the sparsity of the reconstruction layer in the original network. The network structure of the U-Net upsampling is adjusted slightly to enable the network to distinguish wavelet images from feature images, thereby improving the richness of the features extracted by the model. The experimental results show that the peak signal-to-noise ratio (PSNR) of the fourfold SR model is 32.35 dB and 28.68 dB on the Set5 and Set14 validation sets, respectively. Compared with networks that use wavelet prediction mechanisms, such as the deep wavelet prediction SR (DWSR) and deep wavelet prediction-based residual SR (DWRSR) models, the PSNR on all the tested public datasets is improved by 0.5 dB. The method yields superior results in terms of both visual effect and PSNR, demonstrating the feasibility of the wavelet prediction mechanism in SR reconstruction and thus offering application value and research significance.
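The wavelet decomposition that such networks predict, and the inverse transform used as the reconstruction layer, can be illustrated with a one-level 2-D Haar transform. This is a minimal stand-in for the wavelet machinery, not the paper's network.

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D Haar transform: (LL, LH, HL, HH) subbands at half
    resolution, a stand-in for the wavelet features the U-Net predicts."""
    lo, hi = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2
    split = lambda m: ((m[:, 0::2] + m[:, 1::2]) / 2, (m[:, 0::2] - m[:, 1::2]) / 2)
    (LL, LH), (HL, HH) = split(lo), split(hi)
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse Haar transform, playing the role of the upsampling /
    reconstruction layer: doubles the resolution and is exactly invertible."""
    def merge(c, d):
        m = np.empty((c.shape[0], c.shape[1] * 2))
        m[:, 0::2], m[:, 1::2] = c + d, c - d
        return m
    lo, hi = merge(LL, LH), merge(HL, HH)
    out = np.empty((lo.shape[0] * 2, lo.shape[1]))
    out[0::2], out[1::2] = lo + hi, lo - hi
    return out

img = np.arange(64.0).reshape(8, 8)
rec = ihaar2d(*haar2d(img))   # perfect reconstruction
```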
- Conference Article
- 10.1117/12.2540282
- Aug 14, 2019
Single-image super-resolution (SR) reconstruction aims to recover the corresponding high-resolution (HR) image from one low-resolution (LR) image. SR reconstruction is an ill-posed problem; therefore, effective image prior knowledge is essential for reconstructing the missing details in the LR image. In this paper, we propose an SR method that uses the directional properties of image edges to construct a local smoothing prior and a non-local similarity prior. We utilize the directionlet transform, which effectively represents image edge direction information, to extract directional features; this directional information is then used in a reconstruction framework based on total variation (TV) and non-local means (NLM) to better preserve the sharp edges of the image and improve the reliability of the self-similarity weights. The experimental results demonstrate that the proposed algorithm outperforms some current SR methods both quantitatively and qualitatively.
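The self-similarity weights that the NLM part of such a framework relies on can be sketched as below. This plain-intensity version omits the directional (directionlet) features the paper uses to refine the weights; the bandwidth `h` and the toy patches are assumptions.

```python
import numpy as np

def nlm_weights(patches, query, h=0.5):
    """Non-local-means self-similarity weights for one query patch:
    Gaussian-weighted patch distances, normalized to sum to 1. The paper
    additionally refines these weights with directional edge features."""
    d2 = ((patches - query) ** 2).mean(axis=1)
    w = np.exp(-d2 / h**2)
    return w / w.sum()

patches = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
w = nlm_weights(patches, np.array([1.0, 1.0]))  # the matching patch dominates
```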
- Book Chapter
1
- 10.1007/978-981-15-1864-5_18
- Jan 1, 2020
High Dynamic Range (HDR) imaging has been deployed in many multimedia devices recently. Various tone mapping operators (TMOs) have been developed which map HDR radiance to the range of display devices. The objective of this study is to develop reference images for subjective evaluation of TMOs and full-reference image quality metrics. For this purpose, two psychophysical experiments were conducted. In the first experiment, high-quality reference images were obtained containing the right features of colourfulness, sharpness and contrast. The reference images were used to evaluate the TMOs in subjective and objective assessments. In the second experiment, five popular TMOs were evaluated subjectively. It was found that Reinhard’s photographic tone reproduction based local TMO performed best among the five TMOs. Three full-reference image quality metrics, SSIM, CIELAB (2:1) and S-CIELAB, were used to evaluate the structural similarity and colour image quality of those TMOs. The results showed that the three metrics agreed well with the visual results.
- Research Article
97
- 10.1186/s12938-015-0064-y
- Jul 28, 2015
- BioMedical Engineering OnLine
Background: Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects, or the same subject at different times, which can result in large intensity variations. This intensity variation greatly undermines the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement.
Methods: In this work, we propose a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image also lies between LIR and HIR.
Results: We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that a brain template built with normalization preprocessing is of higher quality than a template built without normalization.
Conclusions: We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans that were acquired on different MRI units. We have validated that the method can greatly improve image analysis performance. Furthermore, with the help of our normalization method, we can create a higher-quality Chinese brain template.
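The two-step procedure (intensity scaling, then histogram matching) can be sketched directly in numpy. The synthetic images, the LIR/HIR values of 10 and 200, and the quantile-based matching below are illustrative choices, not the paper's exact data or parameters.

```python
import numpy as np

def intensity_scale(img, lir, hir):
    """Step 1 (IS): linearly rescale reference intensities into [lir, hir]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * (hir - lir) + lir

def histogram_match(src, ref):
    """Step 2 (HN): map src intensities so that its empirical CDF matches
    the reference image's CDF (standard quantile matching)."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(src.shape)

rng = np.random.default_rng(3)
ref = intensity_scale(rng.normal(100, 20, (64, 64)), 10.0, 200.0)  # high quality
low = rng.normal(300, 60, (64, 64))   # different scanner, shifted intensities
out = histogram_match(low, ref)       # normalized image lies in [LIR, HIR]
```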
- Research Article
154
- 10.1109/tip.2011.2161482
- Feb 1, 2012
- IEEE Transactions on Image Processing
The neighbor-embedding (NE) algorithm for single-image super-resolution (SR) reconstruction assumes that the feature spaces of low-resolution (LR) and high-resolution (HR) patches are locally isometric. However, this does not hold for SR because of the one-to-many mappings between LR and HR patches. To overcome, or at least reduce, this problem for NE-based SR reconstruction, we apply a joint learning technique to train two projection matrices simultaneously and to map the original LR and HR feature spaces onto a unified feature subspace. Subsequently, the k-nearest-neighbor selection of the input LR image patches is conducted in the unified feature subspace to estimate the reconstruction weights. To handle a large number of samples, joint learning locally exploits a coupled constraint by linking the LR-HR counterparts together with the k-nearest grouping patch pairs. To further refine the initial SR estimate, we impose a global reconstruction constraint on the SR outcome based on the maximum a posteriori framework. Preliminary experiments suggest that the proposed algorithm outperforms NE-related baselines.
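The projected neighbor-embedding step can be sketched as: project LR features into the subspace, pick the k nearest training patches there, solve for LLE-style reconstruction weights, and combine the paired HR patches. The random projection matrix stands in for the jointly learned one, and all dimensions are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, lr_dim, hr_dim, d, k = 200, 6, 24, 3, 5
LR = rng.standard_normal((n, lr_dim))    # training LR patch features
HR = rng.standard_normal((n, hr_dim))    # paired HR patch features
P_lr = rng.standard_normal((lr_dim, d))  # random stand-in for the jointly
                                         # learned LR projection matrix

def ne_reconstruct(x):
    """Neighbor embedding in the unified subspace: k nearest LR patches are
    found after projection, LLE-style weights are solved for, and the
    paired HR patches are combined with those weights."""
    Z = LR @ P_lr
    idx = np.argsort(((Z - x @ P_lr) ** 2).sum(axis=1))[:k]
    G = LR[idx] - x                     # neighbors centered on the query
    C = G @ G.T + 1e-6 * np.eye(k)      # regularized local Gram matrix
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                        # weights sum to 1 (LLE constraint)
    return w @ HR[idx]

hr_est = ne_reconstruct(rng.standard_normal(lr_dim))
```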
- Conference Article
14
- 10.1109/isbi.2019.8759153
- Jan 1, 2019
Deep learning techniques have shown promising outcomes in single-image super-resolution (SR) reconstruction from noisy and blurry low-resolution data. SR reconstruction can address the fundamental limitations of transmission electron microscopy (TEM) imaging and potentially attain a balance among trade-offs such as imaging speed, spatial/temporal resolution, and dose/exposure time, which is often difficult to achieve simultaneously otherwise. In this work, we present a convolutional neural network (CNN) model, utilizing both local and global skip connections, aiming for 4× SR reconstruction of TEM images. We used exact image pairs of a calibration grid to generate our training and independent testing datasets. The results are compared and discussed using models trained on synthetic (downsampled) and real data from the calibration grid. We also compare variants of the proposed network with well-known classical interpolation techniques. Finally, we investigate the domain adaptation capacity of the CNN-based model by testing it on TEM images of a cilia sample, which have different image characteristics from the calibration grid.
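The local and global skip connections mentioned above can be sketched with a toy single-channel residual network in numpy: each block adds an identity shortcut (local skip), and the whole stack predicts a residual added back to the input (global skip). The architecture, sizes, and weights are illustrative assumptions, not the paper's model.

```python
import numpy as np

def conv3x3(x, w):
    """'Same' 3x3 single-channel convolution via shifted sums; a toy
    stand-in for the network's conv layers."""
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def local_block(x, w1, w2):
    # local skip connection: identity plus two convs with a ReLU in between
    return x + conv3x3(np.maximum(conv3x3(x, w1), 0.0), w2)

def net(x, weights, scale=0.1):
    # global skip connection: the stack predicts a residual added to the input
    h = x
    for w1, w2 in weights:
        h = local_block(h, w1, w2)
    return x + scale * (h - x)

rng = np.random.default_rng(5)
x = rng.standard_normal((16, 16))
weights = [(rng.standard_normal((3, 3)) * 0.1, rng.standard_normal((3, 3)) * 0.1)
           for _ in range(3)]
y = net(x, weights)   # with all-zero weights the skips make net() the identity
```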
- Conference Article
3
- 10.1109/iwssip55020.2022.9854432
- Jun 1, 2022
Reference-based super-resolution (RefSR) has achieved promising results in the single-image super-resolution (SISR) field by providing additional details from reference images. Existing RefSR methods usually extract similar or aligned features from reference images to further enhance the resolution of the final result. The efficiency of RefSR models therefore depends heavily on the conformity between the features extracted from the low-resolution (LR) and reference images. In this paper, we propose a new reference image generation scheme based on semantic style transfer that frees our model from relevant-feature-extraction computations. The generated reference images have maximal content similarity and identical alignment with the LR input, which compensates for the lost details of the LR images. Unlike previous RefSR methods, which rely on extracting and transferring texture information from the reference image to the LR input, the provided reference images are enriched with the style information of high-resolution (HR) images. Extensive experiments indicate the effectiveness of the proposed reference images.
- Research Article
3
- 10.3390/electronics11071064
- Mar 28, 2022
- Electronics
Online learning is a method for exploiting input data to update deep networks in the test stage to derive potential performance improvements. Existing online learning methods for single-image super-resolution (SISR) utilize an input low-resolution (LR) image for the online adaptation of deep networks. Unlike SISR approaches, reference-based super-resolution (RefSR) algorithms benefit from an additional high-resolution (HR) reference image containing plenty of useful features for enhancing the input LR image. We therefore introduce a new online learning algorithm, using several reference images, that is applicable not only to RefSR but also to SISR networks. Experimental results show that our online learning method is seamlessly applicable to many existing RefSR and SISR models and improves their performance. We further demonstrate the robustness of our method to non-bicubic degradation kernels with in-depth analyses.
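The core idea, building (LR, HR) training pairs from the reference images themselves and adapting the model at test time, can be sketched with a toy linear upscaler in place of a deep network. The decimation operator, step sizes, and dimensions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def downsample(v):
    return v[::2]   # toy 2x decimation standing in for the LR degradation

def online_adapt(U, refs, steps=400, lr=0.05):
    """Test-time online learning: repeatedly sample a reference image, form
    an (LR, HR) pair by downsampling it, and take an SGD step on the
    upscaler U (a linear stand-in for a RefSR/SISR network)."""
    for _ in range(steps):
        hr = refs[rng.integers(len(refs))]
        lo = downsample(hr)
        U = U - lr * np.outer(U @ lo - hr, lo)  # grad of 0.5*||U lo - hr||^2
    return U

refs = [rng.standard_normal(8) for _ in range(16)]
U0 = rng.standard_normal((8, 4)) * 0.1   # unadapted toy upscaler
U1 = online_adapt(U0, refs)
e0 = np.mean([np.linalg.norm(U0 @ downsample(r) - r) for r in refs])
e1 = np.mean([np.linalg.norm(U1 @ downsample(r) - r) for r in refs])
```

After adaptation, the average reconstruction error on the reference set drops, mirroring the performance improvement the abstract reports for adapted networks.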