Abstract

Image fusion is a process that combines information from two or more images of the same scene into a single image, preserving important features from each. The objective of image fusion is to merge complementary information from multiple images into a single resultant image that is more informative, comprehensive, reliable, and precise than any individual source image. Fusion is an effective tool in fields such as remote sensing, robotics, and medical imaging. In the proposed method, the source images were first decomposed using the shift-invariant and directionally selective dual-tree complex wavelet transform (DT-CWT); fusion rules, namely max and local energy, were then applied to combine the low- and high-frequency coefficients, respectively. The final fused image was obtained by applying the inverse DT-CWT to the fused low- and high-frequency components. The fused images were analyzed qualitatively as well as with several quantitative metrics, namely mutual information (MI), structural similarity index (SSIM), entropy, standard deviation (STD), and average gradient (AVG); both analyses show that the proposed scheme preserves more information and produces better results than the other methods.
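As an illustration of the two fusion rules described above, here is a minimal NumPy sketch. It omits the DT-CWT decomposition itself (the abstract does not specify an implementation; in Python the `dtcwt` package is one option) and applies the rules to generic coefficient arrays. The function names and the 3×3 local-energy window are illustrative assumptions, not details from the paper:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view


def local_energy(coeffs, win=3):
    """Sum of squared coefficient magnitudes over a win x win neighbourhood.

    Works for the complex-valued DT-CWT high-frequency subbands; the window
    size is an assumption, not taken from the paper.
    """
    pad = win // 2
    mag2 = np.abs(coeffs) ** 2
    padded = np.pad(mag2, pad, mode="reflect")
    windows = sliding_window_view(padded, (win, win))
    return windows.sum(axis=(-2, -1))


def fuse_lowpass(lo_a, lo_b):
    """Max rule for the low-frequency (approximation) coefficients."""
    return np.maximum(lo_a, lo_b)


def fuse_highpass(hi_a, hi_b, win=3):
    """Local-energy rule for the high-frequency (detail) coefficients:
    at each position, keep the coefficient whose neighbourhood carries
    more energy."""
    mask = local_energy(hi_a, win) >= local_energy(hi_b, win)
    return np.where(mask, hi_a, hi_b)
```

In a full pipeline these two functions would be applied, subband by subband, to the pyramids returned by a forward DT-CWT of each source image, and the fused pyramid would then be passed through the inverse transform to obtain the final image.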
