Abstract

Image fusion is an important task in both image processing and computer vision research that relies on multisensor processing and multiscale analysis. This paper proposes a novel image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and a pulse-coupled neural network (PCNN) with digital filtering. First, we decompose the two source images into a low-frequency subband and a series of high-frequency subband coefficients using the NSCT, and repeat this step on each successive low-frequency subband. Second, the low-frequency subband coefficients at each decomposition level of both images are duplicated and then processed with a Laplacian filter and an average filter: the Laplacian filter improves the representation of edges and texture, while the average filter smooths the image so that the low-frequency subband coefficients yield a better reconstruction. The resulting coupling coefficients from the two images are then fused by the PCNN. Finally, the fused image is reconstructed from the low- and high-frequency subband coefficients at different scales and directions using the inverse NSCT. Experimental results show that the proposed algorithm outperforms state-of-the-art conventional image fusion algorithms.
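To make the overall pipeline concrete, the sketch below follows the same decompose, filter, fuse, and reconstruct structure under simplifying assumptions: a stationary wavelet transform (PyWavelets' swt2) stands in for the NSCT, and a simple Laplacian-energy comparison stands in for the PCNN coupling step. It is a minimal illustration of the structure described in the abstract, not the authors' implementation.

```python
# Minimal sketch of a multiscale image fusion pipeline.
# Assumptions: swt2 replaces the NSCT, and a Laplacian-energy comparison
# replaces the PCNN firing-map decision; both are stand-ins, not the
# method proposed in the paper.
import numpy as np
import pywt
from scipy import ndimage


def fuse_images(img_a, img_b, levels=2):
    """Fuse two aligned grayscale images of identical shape.

    Each side length must be divisible by 2**levels (a requirement of swt2).
    """
    # 1. Multiscale decomposition (stand-in for the NSCT):
    #    each level yields one approximation and three detail subbands.
    coeffs_a = pywt.swt2(img_a, "db2", level=levels)
    coeffs_b = pywt.swt2(img_b, "db2", level=levels)

    fused = []
    for (la, highs_a), (lb, highs_b) in zip(coeffs_a, coeffs_b):
        # 2. Low-frequency rule: smooth with an average filter, measure
        #    edge/texture activity with a Laplacian, and keep the
        #    coefficient with the larger response (crude stand-in for
        #    the PCNN coupling decision).
        act_a = np.abs(ndimage.laplace(ndimage.uniform_filter(la, size=3)))
        act_b = np.abs(ndimage.laplace(ndimage.uniform_filter(lb, size=3)))
        low = np.where(act_a >= act_b, la, lb)

        # 3. High-frequency rule: pick the coefficient of larger magnitude.
        highs = tuple(
            np.where(np.abs(ha) >= np.abs(hb), ha, hb)
            for ha, hb in zip(highs_a, highs_b)
        )
        fused.append((low, highs))

    # 4. Inverse transform reconstructs the fused image.
    return pywt.iswt2(fused, "db2")
```

Calling `fuse_images` on two co-registered grayscale arrays (for example, a multi-focus image pair normalized to [0, 1]) returns a single fused array; replacing the wavelet stage with an NSCT implementation and the activity comparison with a PCNN would recover the structure of the proposed method.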

