Abstract

Urban land cover mapping with very-high-resolution (VHR) satellite images has attracted wide attention in environmental and social investigations, but classical per-pixel and object-based mapping results are not accurate enough to serve these applications, owing to inappropriate analysis scales. Accordingly, this study aims to 1) propose a self-adaptive segmentation scale (the "selfhood scale"), i.e., the optimum scale for analyzing a pixel, which depends on the pixel's category and surrounding contrasts; and 2) apply selfhood scales to urban land cover mapping. For the first aim, a learning mechanism is presented to estimate selfhood scales; for the second, two methods ("zipper merging" and "restricted forest") that use the learned selfhood scales are proposed to improve per-pixel and object-based classification results, respectively. The experimental results demonstrate that these methods achieve significant improvements in both per-pixel and object-based land cover mapping in urban areas, raising overall accuracies by 2.8% and 7.6%, respectively. Moreover, the proposed methods are further used to generate land cover maps of Beijing and Zhuhai, which perform much better than those of the classical methods. It can therefore be concluded that selfhood scales are effective for improving land cover mapping in urban areas.

