Abstract

Most infrared and visible image fusion methods decompose the source images into two parts during fusion, which leads to insufficient extraction of details and salient targets. To address this problem, we first analyze VGG-19 and, guided by transfer learning, retain the five convolutional layers we need. The proposed fusion method then feeds the source images directly into these five layers for feature extraction. From the extracted features, activity-level maps are obtained using the L1-norm and an averaging operator. On this basis, a softmax function and an up-sampling operator are used to obtain the weight maps. The final weight maps are then convolved with the infrared and visible images, respectively, to obtain five candidate fused images. Finally, for each position we take the maximum value among the five candidates to reconstruct the final fused image. Experimental results show that the proposed method achieves better visual quality with fewer artifacts and less noise, and it also outperforms several traditional and popular fusion methods in objective evaluation.
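The pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes PyTorch/torchvision, takes the five retained layers to be the ReLU outputs after conv1_1 through conv5_1 of VGG-19 (an assumption; the abstract does not name them), uses a 3x3 window for the averaging operator, and interprets the weighting of the source images as per-pixel multiplication.

```python
# Minimal sketch of the fusion pipeline in the abstract.
# Assumptions (not stated in the abstract): the five layers are
# relu1_1..relu5_1, a 3x3 averaging window, and element-wise weighting
# of the source images by the up-sampled weight maps.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Indices of relu1_1, relu2_1, relu3_1, relu4_1, relu5_1 in vgg19().features
LAYER_IDS = [1, 6, 11, 20, 29]  # assumed choice of the five layers

def extract_features(img, model):
    """Run img through VGG-19 and collect the five intermediate feature maps."""
    feats, x = [], img
    for i, layer in enumerate(model.features):
        x = layer(x)
        if i in LAYER_IDS:
            feats.append(x)
    return feats

def fuse(ir, vis):
    """ir, vis: (1, 3, H, W) tensors in [0, 1] (grayscale replicated to 3 channels)."""
    model = vgg19(weights="IMAGENET1K_V1").eval()
    with torch.no_grad():
        candidates = []
        for f_ir, f_vis in zip(extract_features(ir, model),
                               extract_features(vis, model)):
            # Activity-level maps: channel-wise L1-norm, then local averaging.
            a_ir = F.avg_pool2d(f_ir.abs().sum(1, keepdim=True), 3, stride=1, padding=1)
            a_vis = F.avg_pool2d(f_vis.abs().sum(1, keepdim=True), 3, stride=1, padding=1)
            # Softmax over the two sources gives per-pixel weights that sum to 1.
            w = torch.softmax(torch.cat([a_ir, a_vis], dim=1), dim=1)
            # Up-sample the weight maps back to the input resolution.
            w = F.interpolate(w, size=ir.shape[-2:], mode="bilinear", align_corners=False)
            candidates.append(w[:, 0:1] * ir + w[:, 1:2] * vis)
        # Final image: per-pixel maximum over the five candidate fusions.
        return torch.stack(candidates, dim=0).max(dim=0).values
```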
