Abstract

In this work, we propose a novel multiscale transform decomposition model for multi-focus image fusion to improve fusion performance. The motivation of the proposed fusion framework is to make full use of the decomposition characteristics of the multiscale transform. The nonsubsampled contourlet transform (NSCT) is first used to decompose the source multi-focus images into a low-frequency (LF) band and several high-frequency (HF) bands, separating out the two basic characteristics of the source images, i.e., principal information and edge details. The common "average" and "max-absolute" fusion rules are applied to the low- and high-frequency components, respectively, and a basic fused image is generated. Then the difference images between the basic fused image and the source images are calculated, and the energy of the gradient (EOG) of the difference images is used to refine the basic fused image by integrating an average filter and a median filter. Visual comparisons and quantitative evaluations using fusion metrics (VIFF, QS, MI, QAB/F, SD, and QPC) and running time, against state-of-the-art algorithms, demonstrate the superior performance of the proposed fusion technique.
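The fusion rules and focus measure named in the abstract can be sketched concisely. The following is a minimal, illustrative sketch (not the authors' implementation), assuming the NSCT decomposition of each source image is already available from a dedicated NSCT library; the function names are hypothetical and only NumPy is used:

```python
import numpy as np

def fuse_low(lf_a, lf_b):
    """'Average' rule for the low-frequency (principal) bands."""
    return (lf_a + lf_b) / 2.0

def fuse_high(hf_a, hf_b):
    """'Max-absolute' rule for high-frequency (detail) bands:
    keep, at each position, the coefficient with larger magnitude."""
    return np.where(np.abs(hf_a) >= np.abs(hf_b), hf_a, hf_b)

def energy_of_gradient(img):
    """Energy of gradient (EOG): sum of squared forward differences,
    used here as a focus/activity measure on a difference image."""
    gx = np.diff(img, axis=1)  # horizontal gradient
    gy = np.diff(img, axis=0)  # vertical gradient
    return float(np.sum(gx ** 2) + np.sum(gy ** 2))
```

In the described pipeline, `fuse_low` and `fuse_high` would be applied band-by-band to the NSCT coefficients to form the basic fused image, and `energy_of_gradient` would then score the difference images between that result and each source to guide the refinement step.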
