Abstract

Semantic segmentation of remote sensing images is an important but unsolved problem in the remote sensing community. Advanced image semantic segmentation models, such as DeepLabv3+, have achieved impressive performance in semantically labeling very high resolution (VHR) remote sensing images. However, these models struggle to capture the precise outlines of ground objects and to exploit the contextual information that reveals relationships among image objects, which is needed to optimize segmentation results. Consequently, this study proposes a semantic segmentation method for VHR images that combines a deep learning semantic segmentation model (DeepLabv3+) with object-based image analysis (OBIA), wherein the digital surface model (DSM) is employed to provide geometric information that enhances the interpretation of VHR images. The proposed method first obtains two initial probabilistic labeling predictions using a DeepLabv3+ network on the spectral imagery and a random forest (RF) classifier on hand-crafted features, respectively. These two predictions are then fused by Dempster-Shafer (D-S) evidence theory and fed into an object-constrained higher-order conditional random field (CRF) framework, which estimates the final semantic labeling while accounting for spatial contextual information. The proposed method is evaluated on the ISPRS 2D semantic labeling benchmark, achieving competitive overall accuracies of 90.6% and 85.0% on the Vaihingen and Potsdam datasets, respectively.
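The fusion step can be illustrated with the standard Dempster-Shafer rule of combination; treating the per-pixel, per-class probabilities of the two classifiers as basic probability assignments (m_1 from the DeepLabv3+ output, m_2 from the RF output) is an assumption made here for illustration, since the abstract does not specify how the mass functions are constructed. For a class hypothesis A at a given pixel,

$$ m_{1 \oplus 2}(A) = \frac{1}{1 - K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C), $$

where K measures the conflict between the two evidence sources. The fused masses provide the per-pixel class scores that are then passed to the object-constrained higher-order CRF stage.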
