Abstract
Properties such as the pulse synchronization of neurons and global coupling have greatly motivated researchers to apply pulse coupled neural network (PCNN) models to image fusion. However, manual adjustment of a PCNN's parameters degrades fusion performance, and a conventional PCNN can process only one image at a time. This work introduces a parameter-adaptive unit-linking dual-channel PCNN model that retains all the properties of the PCNN while processing two images simultaneously, and uses it to implement a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain for integrating infrared and visible images. All parameters of the proposed model are estimated automatically from the source images. The infrared and visible images are first decomposed using NSCT into a sequence of band-pass directional sub-bands and a low-pass sub-band. The band-pass directional sub-bands are fused using a fractal dimension-based linking strength, while the low-pass sub-bands are combined using a new linking strength based on the multi-scale morphological gradient of the coefficients. Finally, the fused image is reconstructed from the fused sub-bands by applying the inverse NSCT. Fourteen state-of-the-art methods are adopted for comparison. Qualitative comparison is performed by visual inspection based on the human visual system, while six objective metrics are used for quantitative evaluation. The experimental results show that the proposed method is competitive and outperforms several existing methods.
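The decompose-fuse-reconstruct pipeline described above can be illustrated with a minimal sketch. This is not the paper's method: it substitutes a single-level box-blur base/detail split for the NSCT, a larger-magnitude rule for the PCNN-driven detail fusion, and plain averaging for the low-pass combination. The function names and the 5x5 kernel size are illustrative choices.

```python
import numpy as np

def smooth(img, k=5):
    # Box blur as a crude stand-in for the NSCT low-pass filtering stage.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(ir, vis):
    """Fuse two registered grayscale images given as float arrays."""
    # Decompose each source into a base (low-pass) and detail (high-pass) layer.
    low_ir, low_vis = smooth(ir), smooth(vis)
    high_ir, high_vis = ir - low_ir, vis - low_vis
    # Detail layers: keep the coefficient with larger absolute activity
    # (the paper instead fires a dual-channel PCNN on the NSCT sub-bands).
    high = np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)
    # Base layers: simple average in place of the PCNN-based combination.
    low = 0.5 * (low_ir + low_vis)
    # Reconstruction: sum of fused layers in place of the inverse NSCT.
    return low + high
```

Because the decomposition is exactly invertible (base plus detail recovers the source), fusing an image with itself returns that image unchanged, which is a useful sanity check for any such pipeline.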