Abstract

In current image fusion techniques, dual-band images are typically fused to obtain a fused image with salient target information, or intensity and polarization images are fused to produce an image with enhanced visual perception. However, the current lack of dual-band polarization image datasets and effective fusion methods poses significant challenges for extracting more information from a single image. To address these problems, we construct a dataset containing intensity and polarization images in the visible and near-infrared bands. Furthermore, we propose an end-to-end image fusion network that uses attention mechanisms and atrous spatial pyramid pooling to extract key information and multi-scale global contextual information. Moreover, we design efficient loss functions to train the network. Experiments verify that the proposed method outperforms the state-of-the-art in both subjective and objective evaluations.
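The atrous spatial pyramid pooling (ASPP) idea mentioned above can be illustrated with a minimal sketch: the same feature map is convolved in parallel with kernels at several dilation rates, so context is gathered at multiple scales without shrinking the map. This is a toy single-channel NumPy illustration under assumed simplifications (fixed 3x3 averaging kernel, zero padding); the paper's actual network uses learned multi-channel convolutions.

```python
# Toy sketch of atrous spatial pyramid pooling (ASPP), not the paper's
# implementation: single channel, fixed 3x3 averaging kernel, zero padding.
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """3x3 dilated convolution with zero padding so output size equals input size."""
    pad = rate * (kernel.shape[0] // 2)      # 'same' padding grows with the rate
    xp = np.pad(x, pad, mode="constant")
    h, w = x.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            # sample the padded input at dilated (strided) offsets
            patch = xp[i : i + 2 * pad + 1 : rate, j : j + 2 * pad + 1 : rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def aspp(x, rates=(1, 2, 4)):
    """Apply parallel dilated convolutions at several rates and stack the
    results, capturing multi-scale context at a constant resolution."""
    kernel = np.full((3, 3), 1.0 / 9.0)      # toy averaging kernel
    return np.stack([dilated_conv2d(x, kernel, r) for r in rates], axis=0)
```

Because every branch keeps the spatial resolution of its input, the stacked outputs can be concatenated and passed to later fusion layers directly, which is what makes ASPP convenient for dense-prediction tasks like image fusion.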
