Abstract

Although traditional image fusion methods can produce information-rich results, the fused images often contain obvious artificial noise and artifacts. Fusion algorithms based on neural networks avoid these shortcomings, but they are more complex and less flexible. In this study, we propose a fusion method using the deep residual neural network ResNet152, which not only effectively suppresses artificial noise but also preserves edge details and improves the efficiency of the neural network. In the proposed method, an infrared image and a visible light image are decomposed by a multiscale transformation in an optimized nonsubsampled contourlet transform (NSCT) domain; ResNet152 is then used to extract deep features of the low-pass component to guide its fusion, while the bandpass components are fused by taking the modulus maximum. This approach fully retains the global features and structural information of the source images in the fused result. Experiments on public test image sets show that, subjectively, the proposed method produces sharper depth edges and fewer noise artifacts than traditional fusion methods, and, objectively, its average scores on the evaluation functions exceed those of other fusion methods.
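The bandpass fusion rule described above (taking the modulus maximum) can be sketched in a few lines of NumPy. This is a minimal illustration of that one step only, assuming the two inputs are corresponding bandpass subbands already produced by the NSCT decomposition; the function name and toy arrays are our own, and the ResNet152-guided low-pass fusion is not shown.

```python
import numpy as np

def fuse_bandpass_modulus_maximum(band_a, band_b):
    """Fuse two bandpass subbands by keeping, at each pixel,
    the coefficient with the larger absolute value (modulus maximum)."""
    return np.where(np.abs(band_a) >= np.abs(band_b), band_a, band_b)

# Toy 2x2 subbands standing in for NSCT bandpass coefficients
a = np.array([[0.9, -0.1], [0.2, -0.8]])
b = np.array([[-0.3, 0.5], [-0.7, 0.4]])
fused = fuse_bandpass_modulus_maximum(a, b)
# At each pixel the coefficient with the larger magnitude survives:
# [[0.9, 0.5], [-0.7, -0.8]]
```

The rule is a common choice for high-frequency subbands because coefficients of large magnitude correspond to salient edges and textures in the source images.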

Highlights

  • In fields such as the military, navigation, stealth weapon detection, and medical imaging [1]–[4], a variety of different imaging bands are typically necessary to monitor the target scene to obtain a more comprehensive visual understanding

  • The structure of this paper is as follows: In Section 2, we focus on the NSCT and residual network (ResNet)

  • The 21 sets of infrared and visible light images used in the experiments are all preregistered images provided by Toet [25] and others


Introduction

In fields such as the military, navigation, stealth weapon detection, and medical imaging [1]–[4], a variety of imaging bands is typically necessary to monitor the target scene and obtain a more comprehensive visual understanding. Acquiring images with cameras operating at different wavebands provides rich and detailed scene information, and for specific observation scenarios the imaging advantages of multiple bands can be combined to reveal more detail. Image fusion technology has been studied extensively over the past several decades. Multiscale transformation methods based on Laplacian pyramids [5], [6] and contrast pyramids [7], [8] were proposed for image decomposition. Liu et al. [9] designed an image fusion method based on

