Abstract

Semantic segmentation of roads in remote-sensing images is a challenging task. This paper proposes DANet, a semantic segmentation model for road extraction from remote-sensing images. The model addresses the missed detections, misclassifications, and strongly jagged target edges that other semantic segmentation networks produce on complex and diverse remote-sensing imagery. It uses two ASPP structures for multi-scale feature fusion and combines a DarkNet-style downsampling path with a SegNet-style upsampling path, improving the model's ability to extract road feature information from remote-sensing images. On the CHN6-CUG Roads Dataset, the proposed network structure achieves a 1.15% improvement in accuracy over U-Net, a 1.09% gain in road IoU over HRNet-V2, and a 1.13% increase in F1-score over U-Net.
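The abstract does not include implementation details, so the following is a minimal PyTorch sketch of a generic ASPP (Atrous Spatial Pyramid Pooling) block of the kind the model uses for multi-scale feature fusion. The class name, channel counts, and dilation rates are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Generic ASPP sketch: parallel dilated convolutions capture context
    at multiple scales; the branches are concatenated and fused."""
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18)):  # rates are assumed
        super().__init__()
        # 1x1 branch plus one 3x3 dilated branch per rate
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)]
            + [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
               for r in rates]
        )
        # Global-average-pooling branch for image-level context
        self.pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
        )
        # Project the concatenated branches back to out_ch channels
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [b(x) for b in self.branches]
        # Upsample the pooled branch back to the input resolution
        feats.append(F.interpolate(self.pool(x), size=(h, w),
                                   mode="bilinear", align_corners=False))
        return self.project(torch.cat(feats, dim=1))


# Example: fuse multi-scale road features from a hypothetical 512-channel map
x = torch.randn(1, 512, 32, 32)
y = ASPP(512, 256)(x)   # -> torch.Size([1, 256, 32, 32])
```

In an encoder-decoder layout like the one described, such a block would typically sit between the DarkNet-style downsampling path and the SegNet-style upsampling path, though the exact placement of the two ASPP structures is not specified in the abstract.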
