Abstract

Multi-modal image fusion is widely used in many fields, especially military, medical, and industrial detection. Image fusion integrates the redundant and complementary information of two or more multi-modal images into a single image, so that the fused image contains more useful information. In this paper, we construct a novel dataset for Multi-modal Image Fusion Applications (MOFA), covering four modalities: visible, near-infrared (NIR), long-wavelength infrared (LWIR), and polarization. The MOFA dataset contains 1062 images in 118 groups, of which 450 are indoor and 612 are outdoor. The dataset is applied to several image fusion tasks, including general multi-modal image fusion, fusion-based image super-resolution, and image restoration. Multiple image fusion methods are compared and analyzed on this dataset, evaluated with both subjective assessment and objective metrics. Based on the experiments, the advantages and disadvantages of the different methods are discussed, and the open challenges of image fusion are summarized.
