Abstract

Infrared and visible images are fused to obtain a single high-quality image that retains visible detail and salient infrared targets and can serve downstream vision tasks. The fusion strategy is key to efficient image fusion, yet the strategies used in current deep-learning-based fusion methods are handcrafted and not learnable, which limits further improvement in infrared and visible image fusion. In this article, FusionGRAM, a novel end-to-end infrared and visible image fusion framework, is proposed; it adaptively fuses visible detail information and infrared thermal information. The framework is fully convolutional, and the parameters of every part are learnable. The proposed method applies dense connections with an attention mechanism and a gradient residual, improving the network's ability to capture critical information during feature extraction. Furthermore, pixel-intensity and detail loss functions are used to train FusionGRAM, and the fused features are reconstructed by four convolution layers to obtain an informative fused image. Ablation experiments and comparisons on public datasets show that FusionGRAM outperforms current state-of-the-art fusion methods in both subjective and objective evaluations.
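The abstract names three concrete ingredients: densely connected feature extraction with an attention mechanism and a gradient residual, and a training objective combining pixel-intensity and detail terms. The sketch below illustrates, in PyTorch, one plausible form of these pieces; the layer widths, the squeeze-and-excitation style of attention, the Sobel operator, the max-based loss targets, and the weight `alpha` are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the ideas named in the abstract. All shapes, the
# attention form, and the loss design are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Sobel(nn.Module):
    """Fixed Sobel filters used as a simple image-gradient operator."""
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        # weight shape (2, 1, 3, 3): horizontal and vertical kernels
        self.register_buffer("weight", torch.stack([kx, kx.t()]).unsqueeze(1))

    def forward(self, x):  # x: (B, 1, H, W) grayscale image
        g = F.conv2d(x, self.weight, padding=1)
        return g.abs().sum(dim=1, keepdim=True)  # gradient magnitude proxy


class GradResidualAttnBlock(nn.Module):
    """Hypothetical block: densely connected convolutions, channel
    attention, and a convolutional residual path (a stand-in for the
    paper's gradient residual)."""
    def __init__(self, ch=16):
        super().__init__()
        self.c1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.c2 = nn.Conv2d(2 * ch, ch, 3, padding=1)  # dense: takes x and f1
        self.res = nn.Conv2d(ch, ch, 3, padding=1)
        # squeeze-and-excitation style channel attention (assumed form)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        f1 = F.leaky_relu(self.c1(x))
        f2 = F.leaky_relu(self.c2(torch.cat([x, f1], dim=1)))  # dense link
        f2 = f2 * self.attn(f2)   # reweight informative channels
        return f2 + self.res(x)   # residual path


def fusion_loss(fused, ir, vis, alpha=10.0, sobel=Sobel()):
    """Pixel-intensity loss toward the brighter source pixel, plus a
    detail loss toward the stronger source gradient (assumed targets)."""
    intensity = F.l1_loss(fused, torch.max(ir, vis))
    detail = F.l1_loss(sobel(fused), torch.max(sobel(ir), sobel(vis)))
    return intensity + alpha * detail


if __name__ == "__main__":
    ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
    block = GradResidualAttnBlock(ch=16)
    feats = block(nn.Conv2d(2, 16, 3, padding=1)(torch.cat([ir, vis], 1)))
    fused = torch.sigmoid(nn.Conv2d(16, 1, 3, padding=1)(feats))
    print(fusion_loss(fused, ir, vis).item())
```

Because both loss terms take per-pixel maxima over the two source images, the network is pushed to keep the brighter (typically thermal) intensities and the sharper (typically visible) edges at every location, which matches the abstract's goal of adaptively combining thermal and detail information.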
