Abstract

Most infrared and visible image fusion methods must decompose the source images into two parts during fusion, which leads to deficient extraction of details and salient targets. To address this problem, we first analyze VGG-19 and, guided by transfer learning, retain the five convolutional layers we need. The proposed fusion method then feeds the source images directly into these five layers for feature extraction. From the extracted features, activity level maps are obtained via the L1-norm and an averaging operator. A softmax function and an up-sampling operator are then applied to obtain the weight maps. The final weight maps are combined with the infrared and visible images respectively to produce five candidate fused images. Finally, for each pixel position, the maximum value among the five candidates is chosen to reconstruct the final fused image. Experimental results show that the proposed method achieves better visual quality with fewer artifacts and less noise, and it also outperforms several traditional and popular fusion methods in objective evaluation.
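The pipeline in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes the per-layer feature maps have already been extracted (in the paper they come from five VGG-19 convolutional layers), uses element-wise weighting of the source images, and all function names are hypothetical.

```python
import numpy as np

def activity_map(features):
    # features: (C, H, W) feature maps from one convolutional layer.
    # Activity level = L1-norm across channels, then a 3x3 averaging operator.
    act = np.sum(np.abs(features), axis=0)            # L1-norm over channels
    padded = np.pad(act, 1, mode='edge')
    avg = np.zeros_like(act)
    H, W = act.shape
    for i in range(H):
        for j in range(W):
            avg[i, j] = padded[i:i + 3, j:j + 3].mean()
    return avg

def softmax_weights(act_ir, act_vis):
    # Per-pixel softmax over the two activity maps -> weights summing to 1.
    e_ir, e_vis = np.exp(act_ir), np.exp(act_vis)
    s = e_ir + e_vis
    return e_ir / s, e_vis / s

def upsample(w, shape):
    # Nearest-neighbour up-sampling of a weight map to the source-image size.
    H, W = shape
    h, w_cols = w.shape
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w_cols // W
    return w[np.ix_(rows, cols)]

def fuse(ir, vis, ir_feats_list, vis_feats_list):
    # ir, vis: (H, W) source images; *_feats_list: per-layer (C, h, w) features.
    candidates = []
    for f_ir, f_vis in zip(ir_feats_list, vis_feats_list):
        a_ir, a_vis = activity_map(f_ir), activity_map(f_vis)
        w_ir, w_vis = softmax_weights(a_ir, a_vis)
        w_ir = upsample(w_ir, ir.shape)
        w_vis = upsample(w_vis, vis.shape)
        candidates.append(w_ir * ir + w_vis * vis)    # one candidate per layer
    # Final fused image: per-pixel maximum over the candidates.
    return np.max(np.stack(candidates), axis=0)
```

With five layers of features, `fuse` produces five candidates and takes their pixel-wise maximum, mirroring the reconstruction step described above.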
