Abstract

Existing deep-learning-based infrared and visible image fusion methods have made significant progress, but several problems remain to be solved, such as loss of information (targets, texture, etc.) from both the infrared and visible images, and noise and artifacts in the fused image. To address these issues, this paper proposes an infrared and visible image fusion method based on an autoencoder network. Firstly, novel enhancement channels are designed and fed into the network in parallel with the source images to strengthen specific features and reduce information loss during feature fusion. Then, feature maps are extracted by the encoder. Next, a feature fusion method based on feature saliency is proposed, in which a pre-trained classifier measures the saliency of the features; finally, the fused image is reconstructed by the decoder. Experimental results demonstrate that the fused images generated by the proposed method contain salient targets and rich textures. The proposed method also scores higher on objective metrics than state-of-the-art methods, which demonstrates that it fuses infrared and visible images effectively.
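The abstract outlines a pipeline of encoder features, saliency-weighted fusion, and decoder reconstruction. The sketch below illustrates only the saliency-weighted fusion step on already-extracted feature maps; it is a minimal assumption-laden illustration, not the paper's method. In particular, the per-channel mean-absolute-activation saliency proxy stands in for the paper's pre-trained classifier, and the softmax weighting scheme is an assumed design choice.

```python
import numpy as np

def channel_saliency(feat):
    # Proxy saliency: mean absolute activation per channel (C, H, W) -> (C,).
    # The paper uses a pre-trained classifier; this proxy is an assumption.
    return np.abs(feat).mean(axis=(1, 2))

def fuse_features(feat_ir, feat_vis):
    # feat_ir, feat_vis: (C, H, W) encoder feature maps for the
    # infrared and visible inputs, respectively.
    s_ir = channel_saliency(feat_ir)
    s_vis = channel_saliency(feat_vis)
    # Softmax-normalised per-channel weights: channels with higher
    # saliency in one modality contribute more to the fused features.
    w_ir = np.exp(s_ir) / (np.exp(s_ir) + np.exp(s_vis))
    w_vis = 1.0 - w_ir
    return w_ir[:, None, None] * feat_ir + w_vis[:, None, None] * feat_vis

rng = np.random.default_rng(0)
f_ir = rng.standard_normal((8, 16, 16))
f_vis = rng.standard_normal((8, 16, 16))
fused = fuse_features(f_ir, f_vis)
print(fused.shape)  # (8, 16, 16)
```

Because the weights for each channel sum to one, fusing a feature map with itself returns it unchanged, which is a sanity check that the weighting scheme is a convex combination.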
