Abstract

Infrared and visible image fusion plays an important role in robot perception. The key to fusion is extracting useful information from the source images with appropriate methods. In this paper, we propose a deep learning method for infrared and visible image fusion based on region segmentation. First, the source infrared image is segmented into a foreground part and a background part; we then build an infrared and visible image fusion network on the basis of the neural style transfer algorithm. We propose a foreground loss and a background loss to control the fusion of the two parts respectively, and finally the fused image is reconstructed by combining the two parts. Experimental results show that, compared with other state-of-the-art methods, our method retains both the saliency information of the target and the detailed texture information of the background.
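The region-wise combination described above can be sketched as follows. This is a minimal illustration only: it assumes a simple brightness-threshold segmentation and a direct mask-based recombination, whereas the paper's actual method uses a neural-style-transfer network with learned foreground and background losses. The function names and the threshold value are hypothetical.

```python
import numpy as np

def segment_foreground(ir, thresh=0.6):
    """Hypothetical segmentation: bright infrared pixels are treated
    as the salient foreground target (the paper's segmentation
    method may differ)."""
    return (ir > thresh).astype(np.float32)

def fuse(ir, vis, thresh=0.6):
    """Sketch of region-wise fusion: the foreground is taken from the
    infrared image (target saliency), the background from the visible
    image (detail texture), and the two parts are recombined."""
    mask = segment_foreground(ir, thresh)
    foreground = mask * ir            # salient target region from IR
    background = (1.0 - mask) * vis   # textured background from visible
    return foreground + background

# Toy example on random single-channel images with values in [0, 1]
rng = np.random.default_rng(0)
ir = rng.random((4, 4)).astype(np.float32)
vis = rng.random((4, 4)).astype(np.float32)
fused = fuse(ir, vis)
```

In the paper's network-based formulation, the hard mask multiplication above is replaced by the foreground and background loss terms, which softly control how much each region of the fused image draws from each source.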
