Abstract

High-quality fused images that combine infrared and visible information contribute to intelligent and safe driving. In infrared-visible image fusion, useless noise in the infrared image blurs the fused result and causes the loss of texture information from the visible image. To solve this problem, we propose a novel two-stage network (SOSMaskFuse) that effectively reduces noise and extracts the important thermal information from infrared images while preserving sufficient texture detail from visible images. In the first stage, a salient object segmentation (SOS) network is proposed: infrared images are fed into the SOS network to obtain a binary mask of the region of interest. In the second stage, for each layer of features extracted by the encoder network, the newly proposed IMV-F (infrared mask visible fusion) strategy uses the mask to decompose both infrared and visible features into infrared-foreground, visible-foreground, infrared-background, and visible-background parts, and then fuses the foreground and background parts separately into a fused foreground and a fused background. Finally, the decoder network reconstructs the fused features into the final fused image. Experimental results on three public datasets, against eighteen competitive algorithms, indicate that the proposed network produces high-quality fused images with clear background texture while highlighting infrared thermal information. SOSMaskFuse generally outperforms the eighteen compared methods from both quantitative and qualitative perspectives.
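The mask-guided decomposition and separate fusion described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the choice of element-wise maximum for the foreground and averaging for the background are assumptions standing in for the paper's actual IMV-F fusion rules, and `imv_f_fuse` is a hypothetical function name.

```python
import numpy as np

def imv_f_fuse(ir_feat, vis_feat, mask):
    """Sketch of mask-guided feature fusion.

    Decomposes infrared and visible feature maps into foreground/background
    parts using a binary mask, fuses each part separately, and recombines.
    The fusion operators here (max for foreground, mean for background) are
    illustrative assumptions, not the paper's exact IMV-F rules.
    """
    # Decompose each modality with the binary mask (1 = salient foreground).
    ir_fg, ir_bg = ir_feat * mask, ir_feat * (1 - mask)
    vis_fg, vis_bg = vis_feat * mask, vis_feat * (1 - mask)

    # Fuse foreground and background separately.
    fused_fg = np.maximum(ir_fg, vis_fg)   # emphasize thermal targets
    fused_bg = (ir_bg + vis_bg) / 2.0      # preserve background texture

    # Foreground and background regions are complementary, so they sum cleanly.
    return fused_fg + fused_bg
```

Because the mask is binary, each spatial location belongs to exactly one of the two fusion branches, so the final sum does not mix foreground and background responses.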
