Abstract

With the rapid development of UAV remote sensing, satellite remote sensing and computer vision, the semantic segmentation of remote sensing images has also advanced rapidly and is widely used in research on land use classification, ecology, urban planning and other problems. Large differences in spatial and temporal scales, varying image resolutions, insufficient model robustness to the data domain, and blurred object boundaries are the main problems for existing semantic segmentation models based on deep learning. This paper studies the problem of blurred target boundaries after semantic segmentation and proposes a boundary enhancement loss function that highlights the importance of target edges. Compared with other models aimed at higher boundary accuracy, the proposed model can be trained without boundary-labelled data and consumes no additional inference time. The loss function is applied to several other deep learning networks as a plug-and-play module on two different datasets, and the results show that the IoU improves by 2–5% with clearer boundaries and better continuity, an improvement that is most prominent on buildings and roads.
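The abstract does not give the exact form of the boundary enhancement loss. As a minimal sketch consistent with its description (plug-and-play, no boundary annotations, no inference-time cost), one could derive edge weights directly from the ground-truth segmentation masks via a morphological gradient and up-weight the per-pixel cross-entropy near those edges. The function name, the weighting scheme, and the hyperparameters below are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def boundary_weighted_ce_loss(logits, target, boundary_weight=5.0, kernel_size=3):
    """Cross-entropy loss with extra weight on pixels near class boundaries.

    Boundary pixels are located directly from the ground-truth label map
    (dilation vs. erosion disagreement), so no separate boundary annotations
    are required and inference is unaffected.

    logits: (N, C, H, W) raw network outputs
    target: (N, H, W) integer class labels
    """
    # Morphological gradient on the label map: a pixel is a boundary pixel
    # if dilation and erosion of the labels disagree in its neighbourhood.
    labels = target.unsqueeze(1).float()
    pad = kernel_size // 2
    dilated = F.max_pool2d(labels, kernel_size, stride=1, padding=pad)
    eroded = -F.max_pool2d(-labels, kernel_size, stride=1, padding=pad)
    boundary = (dilated != eroded).float().squeeze(1)  # (N, H, W), 1 near edges

    # Per-pixel cross-entropy, up-weighted on boundary pixels.
    per_pixel = F.cross_entropy(logits, target, reduction="none")
    weights = 1.0 + boundary_weight * boundary
    return (weights * per_pixel).sum() / weights.sum()

# Hypothetical usage with any segmentation network:
#   logits = model(images)                       # (N, C, H, W)
#   loss = boundary_weighted_ce_loss(logits, masks)
#   loss.backward()
```

Because the weighting only modifies the training objective, the network architecture and forward pass are untouched, which matches the claim of no extra inference time.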
