Abstract

An adaptive infrared and visible image fusion method based on visual saliency and hierarchical Bayesian modeling (AVSHB), which preserves the highest similarity between the fused image and the source images, is proposed in this paper. First, an effective salient edge preserving filter named SEPF is developed to decompose each source image into a base layer and a detail layer. In SEPF, an ℓ1-norm gradient minimization is derived and then embedded into a two-scale acceleration scheme; benefiting from SEPF, the edges of salient regions are preserved without distortion. Then, an adaptive fusion scheme is proposed that fully accounts for the characteristics of each layer. More concretely, a two-scale fusion strategy based on a visual saliency map (VSM) is designed for the base layers, and a hierarchical Bayesian fusion model is derived for the detail layers. Experimental results on the TNO and RoadScene datasets and the Nato_camp image sequence demonstrate that AVSHB outperforms 16 related state-of-the-art fusion methods both qualitatively and quantitatively. AVSHB generates improved fusion results while sufficiently retaining salient targets and rich details from the source images.
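Since the paper's implementation is not reproduced here, the Python sketch below only illustrates the general decompose-and-fuse pipeline the abstract describes, under clearly labeled assumptions: a bilateral filter stands in for SEPF's ℓ1-norm gradient minimization, a simple mean-intensity contrast map stands in for the VSM construction, a max-absolute rule stands in for the hierarchical Bayesian detail-layer model, and the file names are hypothetical.

```python
import numpy as np
import cv2

def decompose(img, sigma_space=5):
    """Two-scale decomposition: base = edge-preserving smoothing, detail = residual.
    A bilateral filter is used as a generic stand-in for the paper's SEPF."""
    img = img.astype(np.float32)
    base = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=sigma_space)
    return base, img - base

def saliency_weight(img):
    """Crude visual-saliency proxy: per-pixel deviation from the mean intensity.
    This is NOT the paper's VSM; it only serves to weight the base layers."""
    s = np.abs(img.astype(np.float32) - img.mean())
    return s / (s.max() + 1e-8)

def fuse(ir, vis):
    b_ir, d_ir = decompose(ir)
    b_vis, d_vis = decompose(vis)
    # Base layers: saliency-weighted average (stand-in for the two-scale
    # VSM-based strategy).
    w_ir, w_vis = saliency_weight(ir), saliency_weight(vis)
    base = (w_ir * b_ir + w_vis * b_vis) / (w_ir + w_vis + 1e-8)
    # Detail layers: max-absolute selection (stand-in for the hierarchical
    # Bayesian fusion model).
    detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return np.clip(base + detail, 0, 255).astype(np.uint8)

# Hypothetical file names for a grayscale infrared/visible pair.
ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)
vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("fused.png", fuse(ir, vis))
```

The key design point carried over from the abstract is that the two layers are fused by different rules matched to their characteristics: a smooth, saliency-driven weighting for the low-frequency base layers, and a selective rule for the high-frequency detail layers.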
