Abstract

Traditional image fusion methods focus on selecting an effective decomposition approach to extract representative features from the source images and then attempt to find appropriate fusion rules to merge the extracted features. However, existing image decomposition tools are mostly based on kernels or globally energy-optimized functions, which limits their performance across a wide range of image content. This paper proposes a novel infrared and visible image fusion method based on a deep decomposition network and saliency analysis (named DDNSA). First, a modified residual dense network (MRDN) is trained on a publicly available dataset to learn the decomposition process. Second, the structure and texture features of the source images are separated by the trained decomposition network. Then, according to the characteristics of these features, we construct a combination of local and global saliency maps, using a stacked sparse autoencoder and a visual saliency mechanism, to fuse the structural features. In addition, we propose a bi-directional edge-strength fusion strategy for merging the texture features. Finally, the resultant image is reconstructed by combining the fused structure and texture features. Experimental results confirm that the proposed method outperforms state-of-the-art methods in both visual perception and objective evaluation.
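To make the data flow of the five-step pipeline concrete, the following is a minimal Python sketch. It is an illustration under stated assumptions, not the authors' implementation: the `decompose` and `saliency` callables stand in for the trained MRDN and the saliency analysis, the per-pixel max-absolute texture rule merely approximates the bi-directional edge-strength strategy, and reconstruction is assumed to be additive.

```python
import numpy as np

def fuse_structure(s_ir: np.ndarray, s_vis: np.ndarray, saliency) -> np.ndarray:
    # Saliency-weighted average: pixels that are more salient in one
    # source contribute more to the fused structure layer.
    w_ir = saliency(s_ir)
    w_vis = saliency(s_vis)
    w = w_ir / (w_ir + w_vis + 1e-8)
    return w * s_ir + (1.0 - w) * s_vis

def fuse_texture(t_ir: np.ndarray, t_vis: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the edge-strength rule: keep, per pixel,
    # the texture coefficient with the larger absolute response.
    return np.where(np.abs(t_ir) >= np.abs(t_vis), t_ir, t_vis)

def ddnsa_fuse(ir_img: np.ndarray, vis_img: np.ndarray, decompose, saliency) -> np.ndarray:
    # Steps 1-2: the trained decomposition network splits each source
    # image into a structure layer and a texture layer.
    s_ir, t_ir = decompose(ir_img)
    s_vis, t_vis = decompose(vis_img)
    # Step 3: saliency-guided fusion of the structure layers.
    s_fused = fuse_structure(s_ir, s_vis, saliency)
    # Step 4: edge-strength fusion of the texture layers.
    t_fused = fuse_texture(t_ir, t_vis)
    # Step 5: reconstruct by recombining the fused layers
    # (additive recombination assumed here).
    return s_fused + t_fused
```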
