Abstract
Pixel-level image fusion, which merges images of different modalities into a single informative image, has attracted increasing attention. Although many pixel-level fusion methods have been proposed, effective methods that can handle multiple fusion tasks simultaneously remain scarce. To address this problem, we propose SGFusion, a saliency-guided deep-learning framework for pixel-level image fusion: an end-to-end network that can be applied to a variety of fusion tasks with a single trained model. Specifically, the network combines dual-guided encoding, image-reconstruction decoding, and saliency-detection decoding to simultaneously extract multi-scale feature maps and saliency maps from each source image. The saliency maps serve as fusion weights for merging the reconstruction-decoder features into the fused image, which effectively extracts meaningful information from the source images and makes the result better aligned with human visual perception. Experiments on various public datasets indicate that the proposed method achieves state-of-the-art performance in infrared-visible image fusion, multi-exposure image fusion, and medical image fusion.
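For concreteness, the sketch below illustrates the saliency-weighted merging step described above in PyTorch. It is a minimal illustration, not the paper's implementation: the function name, tensor shapes, and the softmax-style normalization of the two saliency maps are all assumptions.

```python
import torch

def saliency_weighted_fusion(feats_a, feats_b, sal_a, sal_b):
    """Hypothetical sketch of saliency-guided fusion.

    feats_a, feats_b: lists of per-scale feature maps (B, C, H_k, W_k)
        from the image-reconstruction decoding branch of each source image.
    sal_a, sal_b: lists of per-scale saliency maps (B, 1, H_k, W_k)
        from the saliency-detection decoding branch.
    Returns a list of fused feature maps, one per scale.
    """
    fused = []
    for fa, fb, sa, sb in zip(feats_a, feats_b, sal_a, sal_b):
        # Normalize the two saliency maps into per-pixel fusion weights
        # (assumed scheme; the paper's exact weighting may differ).
        wa = sa / (sa + sb + 1e-8)
        wb = 1.0 - wa
        # Saliency maps broadcast over the channel dimension.
        fused.append(wa * fa + wb * fb)
    return fused

# Toy usage with random tensors at two scales.
feats_a = [torch.rand(1, 64, 64, 64), torch.rand(1, 128, 32, 32)]
feats_b = [torch.rand(1, 64, 64, 64), torch.rand(1, 128, 32, 32)]
sal_a = [torch.rand(1, 1, 64, 64), torch.rand(1, 1, 32, 32)]
sal_b = [torch.rand(1, 1, 64, 64), torch.rand(1, 1, 32, 32)]
fused = saliency_weighted_fusion(feats_a, feats_b, sal_a, sal_b)
```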