Abstract
Remote sensing image fusion combines data from different remote sensing sensors, or multiple images acquired by the same sensor in different spectral bands, to produce a higher-quality composite image that is high-resolution, multi-band, and multi-temporal. By fusing remote sensing images of different bands, resolutions, and acquisition times, the accuracy of ground-object information can be improved. This paper introduces a remote sensing image fusion method that uses deep learning for deep feature extraction. The Swin Transformer exploits the relationships between neighboring pixels and adaptively extracts panchromatic (PAN) and multispectral (MS) image features through two adaptive feature extraction channels. High-quality satellite data are used to verify the proposed method; the results show strong performance on ERGAS, SCC, SSIM, PSNR, and other evaluation indexes, with good spectral fidelity and good integration of spatial information.
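To make the two-branch design concrete, the following is a minimal sketch, not the authors' implementation: it assumes a PAN branch and an MS branch, each consisting of a patch embedding followed by transformer encoder blocks (a simplified stand-in for Swin Transformer stages), whose outputs are concatenated and fused to reconstruct a pan-sharpened multispectral image. All module names, channel counts, depths, and patch sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchEncoder(nn.Module):
    """One adaptive feature-extraction branch: patch embedding plus
    transformer blocks (simplified stand-in for Swin Transformer stages)."""
    def __init__(self, in_chans, embed_dim=64, depth=2, num_heads=4, patch=4):
        super().__init__()
        # Non-overlapping patch embedding, as in Swin's patch-partition stage.
        self.patch_embed = nn.Conv2d(in_chans, embed_dim,
                                     kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=4 * embed_dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.patch_embed(x)                    # (B, C, H/p, W/p)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, N, C) patch tokens
        tokens = self.blocks(tokens)               # self-attention over patches
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class TwoBranchFusion(nn.Module):
    """Extract PAN and MS features in separate branches, then fuse them."""
    def __init__(self, ms_bands=4, embed_dim=64):
        super().__init__()
        self.pan_branch = BranchEncoder(in_chans=1, embed_dim=embed_dim)
        self.ms_branch = BranchEncoder(in_chans=ms_bands, embed_dim=embed_dim)
        # Simple 1x1-conv fusion head producing a pan-sharpened MS image.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * embed_dim, embed_dim, 1), nn.ReLU(inplace=True),
            nn.Conv2d(embed_dim, ms_bands, 1))

    def forward(self, pan, ms_up):
        # pan: (B, 1, H, W); ms_up: MS image upsampled to PAN resolution.
        feats = torch.cat([self.pan_branch(pan), self.ms_branch(ms_up)], dim=1)
        out = self.fuse(feats)
        # Restore the spatial size reduced by the patch embedding.
        return F.interpolate(out, size=pan.shape[-2:],
                             mode="bilinear", align_corners=False)

if __name__ == "__main__":
    model = TwoBranchFusion(ms_bands=4)
    pan = torch.randn(1, 1, 128, 128)
    ms_up = torch.randn(1, 4, 128, 128)
    print(model(pan, ms_up).shape)  # torch.Size([1, 4, 128, 128])
```

In this sketch the two branches do not share weights, so each can adapt to its input's characteristics (single-band high-resolution PAN versus multi-band lower-resolution MS); the actual fusion and reconstruction stages in the paper may differ.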