Abstract

The demand for semantic segmentation of ultra-high-resolution remote sensing images is growing rapidly across many fields, which places stringent requirements on segmentation accuracy. Most existing methods process ultra-high-resolution images by downsampling or cropping, but this can degrade segmentation accuracy because local details or global contextual information may be lost. Some researchers have proposed two-branch architectures, yet the noise introduced by the global image interferes with the segmentation result and lowers accuracy. We therefore propose a model for high-precision semantic segmentation that consists of a local branch, a surrounding branch, and a global branch. To achieve high precision, the model is designed with a two-level fusion mechanism: low-level fusion captures high-resolution fine structures from the local and surrounding branches, while high-level fusion captures global contextual information from the downsampled input. We conducted extensive experiments and analyses on the ISPRS Potsdam and Vaihingen datasets, and the results show that our model achieves very high segmentation accuracy.
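To make the three-branch, two-level fusion idea concrete, the following is a minimal PyTorch sketch. The module names, channel widths, input sizes, and the use of concatenation as the fusion operation are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a three-branch segmenter with two-level fusion.
# Assumptions (not from the paper): simple conv encoders, concatenation fusion,
# bilinear resizing to align feature maps, and the input sizes used below.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with BatchNorm and ReLU (a generic encoder stage)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class ThreeBranchSegmenter(nn.Module):
    def __init__(self, num_classes=6, base_ch=32):
        super().__init__()
        # Separate shallow encoders for the local patch, its surrounding
        # context crop, and the downsampled global image.
        self.local_enc = conv_block(3, base_ch)
        self.surround_enc = conv_block(3, base_ch)
        self.global_enc = conv_block(3, base_ch)
        # Low-level fusion: merge high-resolution detail from local + surrounding.
        self.low_fusion = conv_block(2 * base_ch, base_ch)
        # High-level fusion: add global context from the downsampled input.
        self.high_fusion = conv_block(2 * base_ch, base_ch)
        self.head = nn.Conv2d(base_ch, num_classes, 1)

    def forward(self, local_patch, surround_crop, global_img):
        f_local = self.local_enc(local_patch)
        f_surround = self.surround_enc(surround_crop)
        f_global = self.global_enc(global_img)

        # Resize surrounding/global features to the local patch resolution
        # before fusing (the paper's exact alignment strategy is not specified).
        size = f_local.shape[-2:]
        f_surround = F.interpolate(f_surround, size=size, mode="bilinear", align_corners=False)
        f_global = F.interpolate(f_global, size=size, mode="bilinear", align_corners=False)

        low = self.low_fusion(torch.cat([f_local, f_surround], dim=1))
        high = self.high_fusion(torch.cat([low, f_global], dim=1))
        return self.head(high)


if __name__ == "__main__":
    model = ThreeBranchSegmenter(num_classes=6)
    local_patch = torch.randn(1, 3, 256, 256)    # high-resolution local crop
    surround_crop = torch.randn(1, 3, 128, 128)  # wider context around the crop
    global_img = torch.randn(1, 3, 128, 128)     # whole image, heavily downsampled
    logits = model(local_patch, surround_crop, global_img)
    print(logits.shape)  # torch.Size([1, 6, 256, 256])
```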
