Abstract

Mixture noise reduction is a key pre-processing step that improves the quality of hyperspectral images (HSIs) and prepares them for subsequent processing. The mixture noise should be modeled as closely as possible to the real HSI noise, which is a challenge for noise removal methods: they rely on simplifying assumptions about the noise model that can pull their performance away from real scenarios. This paper adopts a new general model for the mixture noise, which leads to a model selection framework. The optimization problem of the proposed method is formulated using Bayesian risk under five different models and is cast as a total variation regularized low-rank matrix factorization. The resulting optimization problem is solved using the augmented Lagrange multiplier and proximal gradient algorithms. We also compare the proposed method with other state-of-the-art methods for reducing HSI mixture noise, which is a combination of Gaussian (i.i.d. or non-i.i.d.) and sparse (e.g., stripe, deadline, impulse) noise. Results on both real and synthetic HSI data sets show that the proposed method outperforms the competing methods in both visual comparisons and quantitative evaluations.
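To make the low-rank-plus-sparse decomposition concrete, here is a minimal NumPy sketch of an alternating scheme that splits a noisy HSI matrix (pixels × bands) into a low-rank component and a sparse component via singular value truncation and soft-thresholding. This is a simplified robust-PCA-style illustration only, not the paper's method: the total variation prior, the Bayesian risk formulation, and the augmented Lagrange multiplier machinery are omitted, and the function and parameter names are hypothetical.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: shrinks entries toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lowrank_sparse_denoise(Y, rank=3, lam=0.5, n_iters=100):
    """Alternating decomposition Y ~= L + S, with rank(L) <= rank and S sparse.

    Simplified sketch (assumed names/parameters); the paper's full model
    additionally places a total variation regularizer on the low-rank part.
    """
    S = np.zeros_like(Y)
    for _ in range(n_iters):
        # Low-rank update: truncated SVD of the residual Y - S.
        U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: soft-threshold Y - L (prox of lam * ||S||_1),
        # absorbing stripe/deadline/impulse-like outliers into S.
        S = soft_threshold(Y - L, lam)
    return L, S
```

On a synthetic low-rank matrix corrupted by sparse outliers, the recovered `L` is close to the clean signal while `S` captures the outliers; in practice the rank and threshold would need tuning per scene.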
