Abstract
Pansharpening aims to fuse the rich spectral information of multispectral (MS) images with the spatial details of panchromatic (PAN) images, generating a fused image with both high spectral and high spatial resolution. In general, existing pansharpening methods suffer from spectral distortion and a lack of spatial detail, which can hinder accurate ground-object identification. To alleviate these problems, we propose a Hybrid Attention mechanism-based Residual Neural Network (HARNN). In the proposed network, we develop an encoder attention module in the feature extraction part to better utilize the spectral and spatial features of the MS and PAN images. Furthermore, a fusion attention module is designed to alleviate spectral distortion and improve the contour details of the fused image. A series of ablation and comparison experiments is conducted on the GF-1 and GF-2 datasets. The fusion results, with fewer distorted pixels and more spatial details, demonstrate that HARNN performs the pansharpening task effectively and outperforms state-of-the-art algorithms.
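The abstract describes a residual network whose feature-extraction and fusion stages each carry an attention module. The paper does not give the exact layer definitions here, so the sketch below is only one plausible reading: a residual block whose branch is re-weighted by channel (spectral) attention followed by spatial attention, in the spirit of a hybrid attention mechanism. All module names, channel counts, and the CBAM-style layout are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: module names, channel counts, and the
# channel+spatial attention layout are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights feature channels (spectral emphasis) via global pooling + MLP."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(x)


class SpatialAttention(nn.Module):
    """Re-weights spatial locations (detail emphasis) from pooled channel statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn


class HybridAttentionResBlock(nn.Module):
    """Residual block with channel and spatial attention applied to the residual branch."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        res = self.sa(self.ca(self.body(x)))
        return x + res  # residual connection helps preserve the input spectra


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)               # e.g. features from concatenated MS + PAN inputs
    print(HybridAttentionResBlock(32)(feats).shape)  # torch.Size([1, 32, 64, 64])
```

The skip connection plus attention-weighted residual is what lets such a block emphasize spectral channels and spatial contours without discarding the original MS information.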
Highlights
Remote sensing technology has played an important role in economic, political, military and other fields since the successful launch of the first human-made earth resources satellite
To evaluate the aforementioned objectives, experiments were conducted on datasets of MS and PAN images acquired by the Gaofen-2 (GF-2) and Gaofen-1 (GF-1) satellites, whose MS images consist of four bands (red, green, blue and near-infrared) with image sizes of 6000×6000 and 4500×4500 pixels, respectively
We propose a hybrid attention mechanism-based residual network (HARNN) for the pansharpening task, which is shown to alleviate spectral distortion and sharpen the edge contours of the fused image
Summary
Remote sensing technology has played an important role in economic, political, military and other fields since the successful launch of the first human-made earth resources satellite. With the development of remote sensing technology, existing remote sensing satellites are able to obtain images with increasingly high spatial, temporal and spectral resolution [1]. Owing to technical and hardware limitations [2], however, optical remote sensing satellites can only provide high-resolution PAN images and low-resolution MS images. Existing pansharpening methods can be roughly divided into traditional fusion algorithms [7,8,9] and deep learning based fusion algorithms [10,11]. As the focus of this paper, deep learning based methods have been developed to refine spatial resolution by substituting components [12,13] or by transforming features into another vector space [14].
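For context on what "substituting components" means in the traditional line of work that learning-based methods aim to improve upon, the following is a minimal, hedged sketch of a classical GIHS-style component-substitution baseline: the PAN image replaces (injects detail into) an intensity component computed from the upsampled MS bands. The function name, the simple band-mean intensity, and the synthetic arrays are illustrative assumptions, not the method proposed in this paper.

```python
# GIHS-style component substitution: a classical pansharpening baseline, shown
# only to illustrate the idea; not the paper's proposed method.
import numpy as np


def gihs_pansharpen(ms_up: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Fuse an upsampled MS cube (H, W, B) with a PAN image (H, W) by injecting
    the difference between PAN and the MS intensity component into every band."""
    intensity = ms_up.mean(axis=2)        # crude intensity component from the MS bands
    detail = pan - intensity              # spatial detail missing from the MS image
    return ms_up + detail[..., None]      # add the same detail to each band


# Usage with synthetic data (a real pipeline would first co-register and upsample MS to PAN size)
ms_up = np.random.rand(256, 256, 4).astype(np.float32)
pan = np.random.rand(256, 256).astype(np.float32)
fused = gihs_pansharpen(ms_up, pan)
print(fused.shape)  # (256, 256, 4)
```

Because every band receives the same injected detail, such substitution schemes tend to introduce the spectral distortion that attention-based networks like HARNN are designed to reduce.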