Abstract

Multifocus image fusion aims to overcome the finite depth of field of imaging cameras by combining information from multiple images of the same scene. A novel fusion algorithm is proposed based on multiscale products of the lifting stationary wavelet transform (LSWT) and an improved pulse coupled neural network (PCNN), in which the linking strength of each neuron is chosen adaptively. To select the coefficients of the fused image properly when the source multifocus images are acquired in a noisy environment, the selection principles for the low frequency subband coefficients and the bandpass (high frequency) subband coefficients are discussed separately. For the low frequency subband coefficients, a new sum modified-Laplacian (NSML) of the low frequency subband, which effectively represents the salient features and sharp boundaries of the image in the LSWT domain, is used as the input that motivates the PCNN neurons; for the high frequency subband coefficients, a novel local neighborhood sum of the Laplacian of the multiscale products is developed and taken as the high frequency feature that motivates the PCNN neurons. The coefficients in the LSWT domain with the largest firing times are selected as the coefficients of the fused image. Experimental results demonstrate that the proposed fusion approach outperforms the traditional discrete wavelet transform (DWT)-based, LSWT-based and LSWT–PCNN-based image fusion methods in terms of both visual quality and objective evaluation, even when the source images are noisy.
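The abstract's core mechanism is to feed a focus-measure stimulus into a PCNN and select, per coefficient, the source whose neuron fires more often. The following is a minimal Python sketch of that general idea only, not the authors' implementation: a standard sum-modified-Laplacian stands in for the paper's NSML, the linking strength beta is fixed rather than adaptive, and all function names and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def sum_modified_laplacian(img, window=3):
    # Standard sum-modified-Laplacian (SML) focus measure, summed over a
    # local window; a stand-in for the paper's NSML feature.
    ml = (np.abs(2.0 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)) +
          np.abs(2.0 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1)))
    return convolve(ml, np.ones((window, window)), mode='nearest')

def pcnn_firing_times(stimulus, iterations=200, beta=0.2,
                      alpha_L=1.0, alpha_T=0.2, V_L=1.0, V_T=20.0):
    # Simplified PCNN: each pixel is a neuron; returns how many times each
    # neuron fired. beta is fixed here, whereas the paper chooses the
    # linking strength adaptively per neuron.
    F = stimulus / (stimulus.max() + 1e-12)      # normalised feeding input
    link_kernel = np.array([[0.5, 1.0, 0.5],
                            [1.0, 0.0, 1.0],
                            [0.5, 1.0, 0.5]])
    L = np.zeros_like(F)
    Y = np.zeros_like(F)
    Theta = np.ones_like(F)
    fire_count = np.zeros_like(F)
    for _ in range(iterations):
        L = np.exp(-alpha_L) * L + V_L * convolve(Y, link_kernel, mode='constant')
        U = F * (1.0 + beta * L)                 # internal activity
        Y = (U > Theta).astype(F.dtype)          # neurons fire above threshold
        Theta = np.exp(-alpha_T) * Theta + V_T * Y
        fire_count += Y
    return fire_count

def fuse_subband(coef_a, coef_b):
    # Keep, at each position, the coefficient from whichever source produced
    # more PCNN firings, i.e. the better-focused source.
    t_a = pcnn_firing_times(sum_modified_laplacian(coef_a))
    t_b = pcnn_firing_times(sum_modified_laplacian(coef_b))
    return np.where(t_a >= t_b, coef_a, coef_b)
```

In the paper this selection is applied to LSWT subband coefficients (with the multiscale-products-based Laplacian feature for the high frequency subbands); the sketch above only illustrates the firing-time comparison itself.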
