Abstract

Land cover classification (LCC) is an important application in remote sensing data interpretation and invariably faces large intra-class variance and sample imbalance in remote sensing images. Optical images are obtained by satellites that capture the spectral information of the Earth’s surface, whereas synthetic aperture radar (SAR) images are produced by a satellite actively transmitting electromagnetic waves and receiving the signals reflected from land covers. A single modality (the optical image) can be disturbed by external conditions, especially complex weather such as cloud cover. Using heterogeneous SAR and optical images for LCC can reduce the negative impact of damage to single-modal data, and the multi-modal data can also serve as supplementary information that enhances classification accuracy. However, general LCC methods mainly focus on remote sensing data of a single modality without fully exploiting the multiple modalities available for land covers. We therefore propose a dual-stream deep high-resolution network (DDHRNet) that deeply integrates SAR and optical data at the feature level in every branch. The network effectively exploits the complementary information in heterogeneous images; it improves classification performance and achieves significant gains on clouded images. A multi-modal squeeze-and-excitation (MSE) module is used to fuse the features; compared with ordinary fusion methods, the MSE module improves overall accuracy (OA), the Kappa coefficient, and mean intersection over union (mIoU) by about 1% to 5%. In addition, to support evaluation of our method, we describe in detail how the Gaofen-2 (GF2) and Gaofen-3 (GF3) data are preprocessed before being used in the LCC task. Experiments show that the proposed method compares favorably with other strong segmentation methods and obtains the best performance on heterogeneous images from GF2 and GF3.
The code and datasets are available at: https://github.com/XD-MG/DDHRNet.
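The abstract does not specify the internal design of the MSE module; the full details are in the linked repository. As a rough intuition only, the sketch below shows generic squeeze-and-excitation-style channel recalibration applied to concatenated SAR and optical feature maps. The function name `se_fuse`, the weight matrices `w1`/`w2`, and the plain-list feature layout are illustrative assumptions, not the authors' implementation.

```python
import math

def se_fuse(sar_feat, opt_feat, w1, w2):
    """Illustrative SE-style fusion sketch (NOT the paper's MSE module).

    sar_feat, opt_feat: lists of 2-D channel maps (list[list[list[float]]]).
    w1, w2: dense weight matrices for the excitation bottleneck.
    """
    # Concatenate the two modalities along the channel axis.
    x = sar_feat + opt_feat
    # Squeeze: global average pool each channel to a single scalar.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in x]
    # Excitation: FC -> ReLU -> FC -> sigmoid produces per-channel gates.
    h = [max(0.0, sum(w * v for w, v in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(row, h))))
         for row in w2]
    # Recalibrate: scale every channel map by its learned gate.
    return [[[v * g for v in row] for row in ch] for ch, g in zip(x, s)]
```

The gates let the network emphasize whichever modality's channels are more informative for a given scene, which is the intuition behind using channel attention for SAR/optical fusion.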
