Abstract

In multi-modality image fusion, source image decomposition, such as the multi-scale transform (MST), is a necessary and widely used step. However, when MST is used directly to decompose source images into high- and low-frequency components, the resulting components are not precise enough for the subsequent infrared-visible fusion operations. This paper proposes a non-subsampled contourlet transform (NSCT) based decomposition method for image fusion, in which source images are decomposed into corresponding high- and low-frequency sub-bands. Unlike MST, the obtained high-frequency sub-bands have different decomposition layers, and each layer contains different information. To obtain a more informative fused high-frequency component, maximum-absolute-value and pulse coupled neural network (PCNN) fusion rules are applied to the different sub-bands of the high-frequency components. Activity measures, such as phase congruency (PC), the local measure of sharpness change (LSCM), and local signal strength (LSS), are designed to enhance the detailed features of the fused low-frequency component. The fused high- and low-frequency components are integrated to form the fused image. Experimental results show that the fused images obtained by the proposed method achieve good performance in clarity, contrast, and image information entropy.
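The decompose-fuse-reconstruct pipeline described above can be sketched in a few lines. Since NSCT is not available in standard Python libraries, the sketch below substitutes a simple box-filter decomposition for the NSCT sub-bands; it applies the paper's maximum-absolute-value rule to the high-frequency component and uses a crude local-energy activity measure as a stand-in for the PC/LSCM/LSS measures on the low-frequency component (the PCNN step is omitted). All function names are illustrative, not the authors' implementation.

```python
import numpy as np

def lowpass(img, size=5):
    # Box-filter low-pass: a simple stand-in for the NSCT low-frequency sub-band.
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (size * size)

def fuse(ir, vis, size=5):
    """Fuse an infrared and a visible image of identical shape."""
    # 1. Decompose each source into low- and high-frequency components.
    ir_lo, vis_lo = lowpass(ir, size), lowpass(vis, size)
    ir_hi, vis_hi = ir - ir_lo, vis - vis_lo
    # 2. High-frequency rule: keep the coefficient with maximum absolute value.
    hi = np.where(np.abs(ir_hi) >= np.abs(vis_hi), ir_hi, vis_hi)
    # 3. Low-frequency rule: choose the source with the larger local signal
    #    energy (a crude proxy for the PC/LSCM/LSS activity measures).
    ir_act = lowpass(ir_hi ** 2, size)
    vis_act = lowpass(vis_hi ** 2, size)
    lo = np.where(ir_act >= vis_act, ir_lo, vis_lo)
    # 4. Reconstruct the fused image from the fused components.
    return lo + hi
```

Note that fusing an image with itself reconstructs the original, which is a quick sanity check that the decomposition and reconstruction steps are consistent.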

Highlights

  • Both infrared and visible images are widely used in daily life

  • The proposed non-subsampled contourlet transform (NSCT) based fusion framework is compared with seven popular fusion methods, including the adaptive sparse representation (ASR) based image fusion method proposed by Liu [28], the convolutional neural network (CNN) based image fusion method proposed by Liu [29], the multi-channel medical image fusion method (CT) proposed by Zhu [25], the multi-modality image fusion method with joint patch clustering based dictionary learning (KIM) proposed by Kim [30], the image fusion method based on multi-scale transform and sparse representation (MST-SR) proposed by Liu [10], and an infrared and visible image fusion algorithm based on the non-subsampled shearlet transform (NSST) and an improved pulse coupled neural network (NSST-PCNN).

  • The fused image obtained by NSCT-PCNN shows low contrast, and its global image features perform poorly

Summary

Introduction

Both infrared and visible images are widely used in daily life. Because of their different wavelengths, infrared and visible light carry different image information. Infrared images can capture all objects that emit infrared radiation, while visible-light images provide the scene details. Whether infrared or visible, it is difficult for an image captured in a single shot to be all-in-focus over an entire scene. Infrared-visible fusion techniques can effectively combine the complementary information, namely the indicative features and the detailed information extracted from the infrared and visible images, respectively [1]. In the fused infrared-visible image, the target can be highlighted while the corresponding indicative features and detailed information are retained. Image fusion techniques as a type of image pre-processing methods, especially

