Abstract
Given multiple source images of the same scene, image fusion integrates their complementary information into a single image, and thus provides a more complete and accurate description. However, when the source images are of low resolution, the resulting fused image can still be of low quality, hindering further image analysis. To improve the resolution, a separate image super-resolution step can be performed. In this paper, we propose a novel framework for simultaneous image fusion and super-resolution. It is based on sparse representations and consists of three steps. First, the low-resolution source images are interpolated and decomposed into high- and low-frequency components. Sparse coefficients are then computed from these components and fused using image fusion rules. Finally, the fused sparse coefficients are used to reconstruct a high-resolution fused image. Experiments on various types of source images (including magnetic resonance images, X-ray computed tomography images, visible images, infrared images, and remote sensing images) demonstrate the superiority of the proposed method both quantitatively and qualitatively.
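Below is a minimal sketch of the three-step pipeline described in the abstract. It assumes a stand-in random patch dictionary, bicubic interpolation with a Gaussian low-pass split, OMP sparse coding, a max-absolute-coefficient fusion rule for the high-frequency coefficients, and averaging for the low-frequency components; the paper's actual dictionary, decomposition, and fusion rules may differ.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter
from sklearn.decomposition import sparse_encode  # OMP-based sparse coding

PATCH, STRIDE, SCALE = 8, 4, 2  # hypothetical patch size, stride, upscaling factor
rng = np.random.default_rng(0)
# Stand-in random dictionary (256 atoms); the paper would use a learned one.
D = rng.standard_normal((256, PATCH * PATCH))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def to_patches(img):
    """Slide a PATCH x PATCH window with step STRIDE and flatten each patch."""
    H, W = img.shape
    coords = [(i, j) for i in range(0, H - PATCH + 1, STRIDE)
                     for j in range(0, W - PATCH + 1, STRIDE)]
    patches = np.stack([img[i:i + PATCH, j:j + PATCH].ravel() for i, j in coords])
    return patches, coords

def from_patches(patches, coords, shape):
    """Overlap-average reconstruction of an image from flattened patches."""
    out, weight = np.zeros(shape), np.zeros(shape)
    for p, (i, j) in zip(patches, coords):
        out[i:i + PATCH, j:j + PATCH] += p.reshape(PATCH, PATCH)
        weight[i:i + PATCH, j:j + PATCH] += 1.0
    return out / np.maximum(weight, 1e-8)

def fuse_and_super_resolve(lr_sources):
    # Step 1: interpolate each low-resolution source and split it into
    # low-frequency (Gaussian-smoothed) and high-frequency components.
    ups = [zoom(img, SCALE, order=3) for img in lr_sources]
    lows = [gaussian_filter(u, sigma=2.0) for u in ups]
    highs = [u - lo for u, lo in zip(ups, lows)]

    # Step 2: sparse-code the high-frequency patches and fuse the coefficients
    # with a max-absolute-value rule (one plausible choice of fusion rule).
    coded, coords = [], None
    for hf in highs:
        patches, coords = to_patches(hf)
        coded.append(sparse_encode(patches, D, algorithm='omp',
                                   n_nonzero_coefs=5))
    stacked = np.stack(coded)                    # (n_sources, n_patches, n_atoms)
    choice = np.argmax(np.abs(stacked), axis=0)  # per-coefficient source index
    fused_coeffs = np.take_along_axis(stacked, choice[None], axis=0)[0]

    # Fuse the low-frequency components by simple averaging (also an assumption).
    fused_low = np.mean(lows, axis=0)

    # Step 3: reconstruct the high-resolution fused image from the fused
    # sparse coefficients and add back the fused low-frequency component.
    fused_high = from_patches(fused_coeffs @ D, coords, ups[0].shape)
    return fused_low + fused_high

# Usage: two 64x64 low-resolution sources -> one 128x128 fused image.
a, b = rng.random((64, 64)), rng.random((64, 64))
print(fuse_and_super_resolve([a, b]).shape)  # (128, 128)
```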