Abstract

Polarization image fusion aims to integrate polarization information with intensity images. Existing methods ignore the modal variance and the differences in salient information between polarization and intensity images, so their fusion results are typically degraded by redundant information preserved from the polarization image. In this paper, we propose a novel unsupervised polarization and intensity image fusion network via pixel information guidance and an attention mechanism, named PAPIF. We define the information expected to be fused as highly polarized targets in polarization images and rich textures in intensity images. To enhance the preservation of different types of salient information at the pixel level, we design a loss function that constrains the pixel distribution between the fused image and the source images. Moreover, to address the inconsistent distributions of polarization information and textures, we introduce an attention mechanism into the fusion module. The channel and spatial attention mechanisms fuse and retain salient information while suppressing redundant information. Thus, our fusion result exhibits richer polarization information with more appropriate brightness than existing methods. Experiments on both RGB and gray datasets demonstrate the superiority of our method over state-of-the-art methods both qualitatively and quantitatively. Our code is publicly available at https://github.com/hanna-xu/PAPIF.
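As a rough illustration of the channel- and spatial-attention gating the abstract describes, here is a minimal NumPy sketch. All function names and the simplified parameter-free gates are our own assumptions for illustration; the actual PAPIF fusion module uses learned layers described in the full paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Weight each channel by a gate derived from its global average
    (a simplified SE/CBAM-style gate without learned weights)."""
    # feat: (C, H, W)
    pooled = feat.mean(axis=(1, 2))      # (C,) global average pooling
    gate = sigmoid(pooled)               # per-channel gate in (0, 1)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Weight each pixel by a gate derived from the cross-channel mean,
    emphasizing spatially salient regions."""
    pooled = feat.mean(axis=0)           # (H, W) cross-channel pooling
    gate = sigmoid(pooled)
    return feat * gate[None, :, :]

def attention_fuse(pol_feat, int_feat):
    """Toy fusion: gate each modality's features with channel then
    spatial attention, then sum the gated features."""
    a = spatial_attention(channel_attention(pol_feat))
    b = spatial_attention(channel_attention(int_feat))
    return a + b

# Example: fuse two random 8-channel feature maps
rng = np.random.default_rng(0)
pol = rng.standard_normal((8, 32, 32))   # stand-in polarization features
inten = rng.standard_normal((8, 32, 32)) # stand-in intensity features
fused = attention_fuse(pol, inten)
print(fused.shape)  # (8, 32, 32)
```

The gating preserves the feature-map shape, so the fused tensor can be decoded back to an image by the rest of the network.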
