Abstract

Infrared and visible images are captured by different sensors, each with its own strengths and weaknesses. To make fused images retain as much salient information as possible, this paper proposes a practical fusion method, termed EDAfuse, which introduces an encoder–decoder with an atrous spatial pyramid network for infrared and visible image fusion. An encoding network of three convolutional neural network (CNN) layers extracts deep features from the input images, and the proposed atrous spatial pyramid model then produces features at five different scales. Same-scale features from the two source images are fused by our fusion strategy, which combines an attention model and an information quantity model, and the decoding network finally reconstructs the fused image. During training, we introduce a loss function with a saliency term to improve the model's ability to extract salient features from the source images. In the experiments, we evaluate the proposed method against seven existing methods using the average values of seven metrics over 21 fused images; our method achieves four best and three second-best values. Subjective assessment also demonstrates that the proposed method outperforms state-of-the-art fusion methods.
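As a rough illustration of the pipeline the abstract describes (encoder, atrous spatial pyramid, same-scale fusion, decoder), the following is a minimal sketch assuming PyTorch. The channel counts, dilation rates, and the element-wise averaging that stands in for the paper's attention and information-quantity fusion strategy are all illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Three convolutional layers extracting deep features from one image."""
    def __init__(self, in_ch=1, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class AtrousPyramid(nn.Module):
    """Parallel dilated convolutions yielding five feature maps at different
    receptive-field scales. The dilation rates here are assumptions."""
    def __init__(self, ch=64, rates=(1, 2, 4, 8, 16)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates]
        )

    def forward(self, x):
        return [b(x) for b in self.branches]  # five same-resolution scale features

class Decoder(nn.Module):
    """Reconstructs the fused image from the concatenated multi-scale features."""
    def __init__(self, ch=64, n_scales=5, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch * n_scales, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feats):
        return self.net(torch.cat(feats, dim=1))

def fuse(ir, vis, encoder, pyramid, decoder):
    # Extract five scale features per image, fuse same-scale pairs, decode.
    # Simple averaging is a placeholder for the attention / information-quantity
    # fusion strategy described in the paper.
    f_ir = pyramid(encoder(ir))
    f_vis = pyramid(encoder(vis))
    fused = [0.5 * (a + b) for a, b in zip(f_ir, f_vis)]
    return decoder(fused)

# Usage on dummy grayscale inputs:
enc, asp, dec = Encoder(), AtrousPyramid(), Decoder()
ir = torch.rand(1, 1, 128, 128)
vis = torch.rand(1, 1, 128, 128)
out = fuse(ir, vis, enc, asp, dec)  # -> torch.Size([1, 1, 128, 128])
```

A note on the design: dilated (atrous) convolutions enlarge the receptive field without downsampling, which is why the pyramid can produce multi-scale features that all remain at the input's full spatial resolution and can be fused scale-by-scale.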
