Abstract

Total variation (TV) regularization removes noise effectively while preserving sharp edges, but it suffers from a loss of contrast in the restored image. In this paper, we first analyze theoretically the loss of contrast in the original TV regularization model, and then propose a forward-backward diffusion model in the TV framework that preserves both edges and contrast in TV image denoising. A backward diffusion term based on a nonconvex, monotonically decreasing potential function is introduced into the TV energy, resulting in a forward-backward diffusion. To finely control the strengths of the forward and backward diffusion, and to design an efficient numerical algorithm for each part separately, we propose a two-step splitting method to solve the proposed model iteratively. In the first step, we adopt an efficient projection algorithm in the dual framework to solve the forward diffusion; in the second step, we use a simple finite-difference scheme to solve the backward diffusion, which compensates for the loss of contrast incurred in the previous step. Finally, we test the models on both synthetic and real images. Compared with the classical TV, forward and backward diffusion (FBD), two-step method (TSM), and TV-FF models, our model performs better in terms of the peak signal-to-noise ratio (PSNR) and mean structural similarity (MSSIM) indexes.
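As a concrete illustration of the two-step splitting described in the abstract, the sketch below alternates a forward TV step solved by a Chambolle-style dual projection with an explicit finite-difference backward step. The backward step here is a small, negatively weighted TV flow used as a stand-in for the paper's nonconvex backward term, whose exact potential is not reproduced on this page; all function names, weights, and step sizes are illustrative assumptions.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary (last row/column set to 0)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Backward differences chosen so that div is the negative adjoint of grad."""
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_forward_step(f, lam=0.1, n_iter=100, tau=0.125):
    """Step 1 (forward diffusion): Chambolle's dual projection for
    min_u 0.5*||u - f||^2 + lam*TV(u)."""
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)

def backward_step(u, beta=0.05, dt=0.1, eps=1e-3):
    """Step 2 (backward diffusion): one explicit finite-difference update of
    u_t = -beta * div(grad u / |grad u|), i.e. a weakly weighted TV flow run
    backwards to restore contrast.  beta, dt, and eps are illustrative."""
    gx, gy = grad(u)
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    return u - dt * beta * div(gx / norm, gy / norm)

def tv_fbd_denoise(f, outer_iters=10, lam=0.1):
    """Alternate the forward and backward steps (the two-step splitting)."""
    u = f.copy()
    for _ in range(outer_iters):
        u = tv_forward_step(u, lam=lam)   # forward diffusion via dual projection
        u = backward_step(u)              # backward diffusion via finite differences
    return u
```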

Highlights

  • Image denoising plays an important role in various applied areas, such as pattern recognition, medical imaging, remote sensing, video processing, and so on

  • The reason we choose these models for comparison is that the total variation (TV) model [12] is the original variational model in image denoising, which is the starting point of our study, and the other three models are representative of the three major classes of contrast-preserving approaches in TV regularization, respectively

  • The forward and backward diffusion (FBD) model performs significantly better since it adopts a linear backward diffusion in the diffusion partial differential equation (PDE), which can compensate for the loss of contrast caused by forward diffusion
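
For readers unfamiliar with FAB-type diffusion, the snippet below evaluates a diffusivity of the general shape used in that line of work (positive, i.e. smoothing, for small gradient magnitudes and negative, i.e. sharpening, over an intermediate range) together with one explicit step of the corresponding PDE. The specific formula and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fab_diffusivity(g, kf=10.0, kb=40.0, w=10.0, alpha=0.5, n=4, m=1):
    """Forward-and-backward diffusion coefficient of the general FAB shape:
    positive for small |grad u| (denoising), negative near g = kb
    (contrast/edge enhancement).  All parameters are illustrative."""
    forward = 1.0 / (1.0 + (g / kf) ** n)
    backward = alpha / (1.0 + ((g - kb) / w) ** (2 * m))
    return forward - backward

def fab_step(u, dt=0.1):
    """One explicit finite-difference step of u_t = div(c(|grad u|) grad u)."""
    gx = np.gradient(u, axis=0)
    gy = np.gradient(u, axis=1)
    c = fab_diffusivity(np.sqrt(gx ** 2 + gy ** 2))
    return u + dt * (np.gradient(c * gx, axis=0) + np.gradient(c * gy, axis=1))
```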


Summary

Introduction

Image denoising plays an important role in various applied areas, such as pattern recognition, medical imaging, remote sensing, and video processing. Some two-step methods first remove the noise by TV regularization and then enhance the contrast of the restoration obtained in the regularization step by classical enhancement methods, such as histogram equalization or gray-scale transformation algorithms. These methods have the drawback of losing weak edges. Gilboa et al. [23] proposed forward and backward diffusion (FBD) to simultaneously remove noise and enhance contrast. In this paper, based on the variational method and backward diffusion, we propose a forward-backward diffusion model in the TV framework (called TV-FBD).
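The two-step baselines referred to above can be sketched with off-the-shelf tools: denoise with TV regularization, then re-stretch the contrast globally. The weight and the choice of histogram equalization below are our illustration of this class of methods, not the paper's TSM variant; because the second step acts globally on intensities, weak edges already flattened by the TV step cannot be recovered, which is the drawback noted above.

```python
from skimage import data, exposure, util
from skimage.restoration import denoise_tv_chambolle

# Generic two-step baseline: TV denoising, then global contrast enhancement.
clean = util.img_as_float(data.camera())
noisy = util.random_noise(clean, var=0.01)          # add Gaussian noise
denoised = denoise_tv_chambolle(noisy, weight=0.1)  # step 1: TV regularization
enhanced = exposure.equalize_hist(denoised)         # step 2: histogram equalization
```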

The loss of contrast in TV regularization
Backward diffusion model
Conclusions
