Abstract

The fusion of infrared and visible images combines the information from two complementary imaging modalities for various computer vision tasks. Many existing techniques, however, fail to maintain a uniform overall style and keep the salient details of each modality simultaneously. This paper presents an end-to-end Laplacian Pyramid Fusion Network with hierarchical guidance (HG-LPFN) that takes advantage of the pixel-level saliency preservation of the Laplacian Pyramid and the global optimization capability of deep learning. The proposed scheme generates hierarchical saliency maps through Laplacian Pyramid decomposition and modal difference calculation. In the pyramid fusion mode, all sub-networks are connected in a bottom-up manner. The sub-network for low-frequency fusion focuses on extracting universal features to produce a uniform overall style, while the sub-networks for high-frequency fusion determine how much of each modality's detail is retained. Taking style, details, and background into consideration, we design a set of novel loss functions that supervise both the low-frequency images and the full-resolution images under the guidance of the saliency maps. Experimental results on public datasets demonstrate that the proposed HG-LPFN outperforms state-of-the-art image fusion techniques.
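To make the decomposition step concrete, the following is a minimal NumPy sketch of a Laplacian Pyramid and of hierarchical saliency maps computed from modal differences of the band-pass layers. It is an illustration of the general technique the abstract names, not the authors' implementation; the function names, the number of levels, and the normalized-absolute-difference saliency formula are all assumptions for demonstration.

```python
import numpy as np

# 5-tap binomial approximation of a Gaussian kernel (an illustrative choice)
KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0


def blur(img):
    """2-D Gaussian blur via the separable 5-tap kernel, edge-padded."""
    padded = np.pad(img, 2, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(5):
        for j in range(5):
            out += KERNEL[i] * KERNEL[j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out


def down(img):
    """Blur, then drop every other row and column."""
    return blur(img)[::2, ::2]


def up(img, shape):
    """Zero-insert to the target shape, then blur (x4 to preserve energy)."""
    out = np.zeros(shape, dtype=float)
    out[::2, ::2] = img
    return blur(out) * 4.0


def laplacian_pyramid(img, levels=3):
    """Band-pass layers plus a final low-frequency residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small, cur.shape))  # high-frequency detail layer
        cur = small
    pyr.append(cur)  # low-frequency residual (fused by the low-freq sub-network)
    return pyr


def saliency_maps(ir, vis, levels=3, eps=1e-8):
    """One saliency map per detail level from the modal difference.

    Values near 1 mean the infrared layer dominates at that pixel,
    values near 0 mean the visible layer does (an assumed normalization).
    """
    p_ir = laplacian_pyramid(ir, levels)
    p_vis = laplacian_pyramid(vis, levels)
    return [np.abs(a) / (np.abs(a) + np.abs(b) + eps)
            for a, b in zip(p_ir[:-1], p_vis[:-1])]
```

By construction, summing each detail layer with the upsampled coarser level reconstructs the input exactly, which is what lets the detail sub-networks decide per pixel how much of each modality to retain without losing information.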
