Abstract

Infrared and visible image fusion needs to preserve both the salient target of the infrared image and the texture details of the visible image. To this end, an infrared and visible image fusion method based on saliency detection is proposed. First, the saliency map of the infrared image is obtained by saliency detection. Then, a specific loss function and network architecture are designed based on the saliency map to improve the performance of the fusion algorithm. Specifically, the saliency map is normalized to [0, 1] and used as a weight map to constrain the loss function. At the same time, the saliency map is binarized to separate salient regions from nonsalient regions, yielding a generative adversarial network with dual discriminators. The two discriminators distinguish the salient regions and the nonsalient regions, respectively, which pushes the generator to produce better fusion results. Experimental results show that the fusion results of our method surpass those of existing methods in both subjective and objective evaluations.
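The weighted loss described above can be made concrete with a minimal sketch. The PyTorch snippet below is illustrative, not the paper's implementation: the function name `saliency_weighted_loss` and the squared-error pixel terms are assumptions, since the abstract does not specify the distance measure. It uses the normalized saliency map so that salient pixels pull the fused image toward the infrared input and the remaining pixels toward the visible input:

```python
import torch

def saliency_weighted_loss(fused, ir, vis, saliency):
    """Hypothetical saliency-weighted content loss.

    `saliency` is assumed normalized to [0, 1]: values near 1 pull the
    fused image toward the infrared intensities, values near 0 pull it
    toward the visible image. The squared-error form is an assumption.
    """
    w = saliency
    loss_ir = (w * (fused - ir) ** 2).mean()       # salient regions follow IR
    loss_vis = ((1.0 - w) * (fused - vis) ** 2).mean()  # the rest follows visible
    return loss_ir + loss_vis
```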

Highlights

  • Image fusion aims to utilize complementary information of two source images to synthesize a fusion image with a more comprehensive understanding of the scene [1, 2]. The infrared image can identify the target according to thermal radiation contrast, and the visible image can provide a clear image in line with the human visual system [3, 4]

  • A large number of infrared and visible image fusion methods have been proposed. These methods can be divided into two categories: (i) traditional methods, which usually complete the fusion task based on mathematical transformation and manual design; (ii) deep learning-based methods, which usually use a specific loss function to optimize a neural network to obtain the fusion result [12]

  • Dataset and Training Details. The training dataset comes from the public infrared and visible dataset TNO, which is the most commonly used dataset in infrared and visible image fusion tasks. In this paper, 28 images are selected from TNO to train the model; 28 images alone are not enough to train a good model. Therefore, a cropping strategy is carried out to expand the training dataset, and each image is cropped into image patches of size 120 × 120 (see the sketch after this list)
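A minimal sketch of this patch-extraction step follows. The function name `crop_patches` and the stride value are assumptions; the highlight only states the 120 × 120 patch size:

```python
import numpy as np

def crop_patches(image, patch_size=120, stride=60):
    """Crop a source image into overlapping patches to expand the
    training set. The stride of 60 is a hypothetical choice; the paper
    only specifies the 120 x 120 patch size."""
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size,
                                 left:left + patch_size])
    return np.stack(patches)
```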

Summary

Introduction

Image fusion aims to utilize complementary information of two source images to synthesize a fusion image with a more comprehensive understanding of the scene [1, 2]. The infrared image can identify the target according to thermal radiation contrast, and the visible image can provide a clear image in line with the human visual system [3, 4]. The key of image fusion is to integrate the effective information and remove the redundant information of the source images to gain a better fusion image [10, 11]. For this purpose, a large number of infrared and visible image fusion methods have been proposed. These methods can be divided into two categories: (i) traditional methods, which usually complete the fusion task based on mathematical transformation and manual design; (ii) deep learning-based methods, which usually use a specific loss function to optimize a neural network to obtain the fusion result [12]. Among the latter, schemes with two discriminators have been explored, but it is difficult for such a scheme to control the balance between the two discriminators [15]
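The region-wise adversarial game described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the helper names (`split_regions`, `generator_adv_loss`, `D_sal`, `D_nonsal`), the 0.5 binarization threshold, and the assumption that each discriminator outputs a probability via a final sigmoid are all hypothetical:

```python
import torch

def split_regions(fused, saliency, threshold=0.5):
    """Binarize the saliency map and split the fused image into salient
    and nonsalient regions (the threshold is a hypothetical choice)."""
    mask = (saliency > threshold).float()
    return fused * mask, fused * (1.0 - mask)

def generator_adv_loss(D_sal, D_nonsal, fused, saliency):
    """Hypothetical generator loss: the salient region should fool the
    discriminator trained on infrared salient regions (D_sal), and the
    nonsalient region should fool the one trained on visible nonsalient
    regions (D_nonsal). Both are assumed to output probabilities."""
    sal_region, nonsal_region = split_regions(fused, saliency)
    return -(torch.log(D_sal(sal_region) + 1e-8).mean()
             + torch.log(D_nonsal(nonsal_region) + 1e-8).mean())
```

Under this formulation, each discriminator only ever sees one type of region, which is one way to sidestep the balance problem that arises when two discriminators compete over the whole image.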
