Abstract

This study proposes a novel unsupervised network for the IR/VIS fusion task, termed RXDNFuse, built on an aggregated residual dense network. In contrast to conventional fusion networks, RXDNFuse is designed as an end-to-end model that combines the structural advantages of ResNeXt and DenseNet, thereby avoiding the complicated manual design of activity-level measurement and fusion rules. Our method formulates image fusion as the problem of proportionally maintaining the structure and intensity of the IR/VIS source images. Through comprehensive feature extraction and combination, RXDNFuse automatically estimates the information preservation degrees of the corresponding source images and extracts hierarchical features to achieve effective fusion. Moreover, we design two loss function strategies to optimize the similarity constraint and the training of network parameters, further improving the quality of detail information. We also generalize RXDNFuse to the fusion of images with different resolutions and of RGB-scale images. Extensive qualitative and quantitative evaluations reveal that our results effectively preserve abundant textural details and highlighted thermal radiation information. In particular, our results form a comprehensive representation of scene information that is more consistent with the human visual perception system.
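The abstract does not spell out the block design, but as a rough illustration of how DenseNet-style feature reuse and ResNeXt-style aggregation (grouped convolution with a residual shortcut) can be combined in a single building block, a minimal PyTorch sketch is given below. The class name, layer widths, cardinality, and depth are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: DenseNet-style feature reuse (concatenating earlier
# layer outputs) combined with a ResNeXt-style grouped convolution and a
# residual shortcut. All hyperparameters below are placeholders.
import torch
import torch.nn as nn


class AggregatedResidualDenseBlock(nn.Module):
    def __init__(self, in_channels: int = 32, growth: int = 16,
                 num_layers: int = 3, cardinality: int = 8):
        super().__init__()
        # Dense part: each layer sees the concatenation of all previous features.
        self.dense_layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.dense_layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ))
            channels += growth
        # Aggregated part: grouped 3x3 convolution over the concatenated
        # features, then a 1x1 projection back to the block's input width.
        self.grouped = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=cardinality)
        self.project = nn.Conv2d(channels, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.dense_layers:
            features.append(layer(torch.cat(features, dim=1)))
        aggregated = self.grouped(torch.cat(features, dim=1))
        # Residual shortcut keeps the block's input/output shapes identical.
        return x + self.project(aggregated)


if __name__ == "__main__":
    # In a fusion network, features from the concatenated IR/VIS pair would
    # pass through blocks like this; here we only check shapes on random data.
    block = AggregatedResidualDenseBlock()
    out = block(torch.randn(1, 32, 64, 64))
    print(out.shape)  # torch.Size([1, 32, 64, 64])
```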
