Abstract

Image fusion aims to aggregate the redundant and complementary information contained in multiple source images; the most challenging aspect is designing robust features and a discriminant model that enhance the saliency information in the fused image. To address this issue, the authors develop a novel image fusion algorithm that preserves the invariant knowledge of the multimodal images. Specifically, they formulate a unified architecture based on the non-subsampled contourlet transform (NSCT). Their method introduces quadtree decomposition and Bezier interpolation to extract crucial infrared features. Furthermore, they propose a saliency-oriented phase congruency-based rule and a local Laplacian energy-based rule for fusing the low- and high-pass sub-bands, respectively. With this approach, the fused image not only combines the local and global features of the source images, avoiding smoothing of the target edges, but also retains fine-scale details and resists the interference noise of the multimodal images. Experimental results indicate that the proposed algorithm performs competitively in both objective evaluation criteria and visual quality.
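
The sketch below is a minimal illustration of the two fusion rules named in the abstract, not the authors' implementation: the NSCT is approximated by a single-level Gaussian low-pass/high-pass split, the phase congruency saliency measure is replaced by a local-contrast proxy, and the quadtree/Bezier infrared feature extraction step is omitted. All function names and parameters (`sigma`, `win`) are assumptions introduced for illustration.

```python
# Illustrative sketch only. Assumptions: NSCT replaced by a Gaussian
# low-pass / residual high-pass split; phase congruency replaced by a
# local-contrast saliency proxy; single decomposition level.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter

def decompose(img, sigma=2.0):
    """Stand-in for NSCT: low-pass approximation plus high-pass residual."""
    low = gaussian_filter(img, sigma)
    return low, img - low

def fuse_low(low_a, low_b, win=9):
    """Saliency-weighted average of the low-pass sub-bands.
    Local standard deviation is used as a rough proxy for phase congruency."""
    def saliency(x):
        mean = uniform_filter(x, win)
        return np.sqrt(uniform_filter((x - mean) ** 2, win)) + 1e-12
    sa, sb = saliency(low_a), saliency(low_b)
    wa = sa / (sa + sb)
    return wa * low_a + (1.0 - wa) * low_b

def fuse_high(high_a, high_b, win=9):
    """Choose-max rule driven by local Laplacian energy."""
    def lap_energy(x):
        return uniform_filter(laplace(x) ** 2, win)
    return np.where(lap_energy(high_a) >= lap_energy(high_b), high_a, high_b)

def fuse(img_ir, img_vis):
    """Decompose both inputs, fuse each sub-band, and recombine."""
    low_ir, high_ir = decompose(img_ir)
    low_vis, high_vis = decompose(img_vis)
    return fuse_low(low_ir, low_vis) + fuse_high(high_ir, high_vis)

if __name__ == "__main__":
    ir = np.random.rand(128, 128)    # placeholder infrared image
    vis = np.random.rand(128, 128)   # placeholder visible image
    print(fuse(ir, vis).shape)       # (128, 128)
```

In the paper's setting, `decompose` would be a multi-scale, multi-directional NSCT and the low-pass weight would come from a phase congruency map rather than local contrast; the structure of the rules (weighted averaging for low-pass, energy-based selection for high-pass) is what the sketch is meant to convey.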
