Abstract

With the increasing availability of multisource image data from Earth observation satellites, image fusion, a technique that produces a single image preserving the major salient features of a set of different inputs, has become an important tool in remote sensing, since complete information usually cannot be obtained from a single sensor. In this article, we develop a new pixel-based variational model for image fusion using gradient features. The basic assumption is that the fused image should have a gradient that is close to the most salient gradient among the multisource inputs. In addition, we incorporate an average quadratic local dispersion measure over the inputs to achieve a uniform and natural visual perception. Furthermore, we introduce a split Bregman algorithm to minimize the proposed functional more efficiently. To verify the effectiveness of the proposed method, we compare it visually and quantitatively with conventional image fusion schemes, such as the Laplacian pyramid, morphological pyramid, and geometry-based enhancement fusion methods. The results demonstrate the effectiveness and stability of the proposed method in terms of standard fusion evaluation metrics. In particular, the proposed method is also notably more computationally efficient than other variational methods.
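
The following is a minimal sketch of the general idea described above, not the authors' exact functional or solver: per-pixel selection of the most salient (largest-magnitude) input gradient as a target field, followed by a variational reconstruction of the fused image. For simplicity it assumes periodic boundaries, a quadratic gradient-fidelity term solved in closed form via the FFT (in place of the paper's formulation with the split Bregman algorithm), and the pixel-wise mean of the inputs standing in for the local dispersion term; the parameter `lam` is a hypothetical trade-off weight.

```python
import numpy as np

def grad(u):
    # Forward differences with periodic wrap (assumption needed by the FFT solver below).
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    return gx, gy

def div(px, py):
    # Backward differences: the negative adjoint of the forward-difference gradient.
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def salient_gradient(images):
    # Per pixel, keep the input gradient with the largest magnitude ("most salient").
    grads = [grad(im) for im in images]
    mags = np.stack([gx**2 + gy**2 for gx, gy in grads], axis=0)
    idx = np.argmax(mags, axis=0)
    gx = np.choose(idx, [g[0] for g in grads])
    gy = np.choose(idx, [g[1] for g in grads])
    return gx, gy

def fuse(images, lam=0.05):
    """Quadratic surrogate of a gradient-based fusion model:
        min_u ||grad(u) - g||^2 + lam * ||u - a||^2,
    where g is the salient target gradient field and a is the pixel-wise mean.
    The Euler-Lagrange equation (lam - Laplacian) u = lam*a - div(g) is solved with the FFT."""
    a = np.mean(images, axis=0)
    gx, gy = salient_gradient(images)
    rhs = lam * a - div(gx, gy)
    H, W = a.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    # Symbol of the negative periodic Laplacian: 4 sin^2(pi f) per axis.
    lap = 4 * np.sin(np.pi * fy) ** 2 + 4 * np.sin(np.pi * fx) ** 2
    u_hat = np.fft.fft2(rhs) / (lam + lap)
    return np.real(np.fft.ifft2(u_hat))

# Usage: fused = fuse([img1, img2, img3]) for co-registered single-band inputs of equal size.
```

Using a quadratic fidelity keeps the sketch to a single linear solve; an L1 gradient fidelity, as suggested by the use of split Bregman in the paper, would instead alternate a shrinkage step on the auxiliary gradient variable with a similar linear solve.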
