Abstract

Robust semantic labeling of high-resolution remote sensing images under foggy conditions is crucial for automatic land-cover monitoring. The task remains challenging owing to low inter-class differentiation, high intra-class variance, and large diversity in object size. Although conventional convolutional neural networks have achieved state-of-the-art performance in semantic segmentation, most networks focus primarily on standard accuracy, while their robustness is rarely explored. This letter proposes a reliable framework that is evaluated across multiple severity levels of fog corruption. Using HRNet as the backbone to maintain high-resolution representations, we develop a multimodal fusion module that exploits the complementary information of LiDAR and multispectral data. In evaluation experiments on fog-corrupted ISPRS 2D semantic labeling datasets, our model demonstrates promising performance, with average mIoU exceeding 80% on the clean datasets and 56% on the corrupted ones.
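To make the fusion idea concrete, the following is a minimal sketch of channel-level fusion of multispectral imagery with a LiDAR-derived height map (DSM), using a per-pixel linear projection as a stand-in for a 1x1 convolution. The shapes, channel counts, and placeholder weights are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

def fuse_modalities(msi, dsm, out_channels=4, seed=0):
    """Concatenate modalities along the channel axis, then mix them
    with a per-pixel (1x1) linear projection.

    msi: (C, H, W) multispectral tile, e.g. IR-R-G gives C = 3
    dsm: (H, W) normalized LiDAR-derived height map
    """
    # Stack the height map as an extra channel: (C+1, H, W)
    x = np.concatenate([msi, dsm[None, ...]], axis=0)
    # Placeholder weights standing in for learned 1x1-conv parameters
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((out_channels, x.shape[0]))
    # einsum applies the same linear mix at every pixel location
    return np.einsum("oc,chw->ohw", w, x)

msi = np.zeros((3, 8, 8))   # toy IR-R-G tile
dsm = np.ones((8, 8))       # toy height map
fused = fuse_modalities(msi, dsm)
print(fused.shape)  # (4, 8, 8)
```

In a full network, the placeholder projection would be a learned layer and the fusion would typically happen at multiple resolutions of the HRNet feature pyramid; this sketch only shows the channel-concatenation step.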