Abstract
In human cognition, both visual features (i.e., spectrum, geometry, and texture) and relational contexts (i.e., spatial relations) are used to interpret very-high-resolution (VHR) images. However, most existing classification methods consider only visual features, so classification performance is susceptible to the confusion of visual features and the complexity of geographic objects in VHR images. In contrast, relational contexts between geographic objects encode spatial knowledge and can therefore help correct initial classification errors during classification post-processing. This study presents models for formalizing relational contexts, including relative relations (such as alongness, betweenness, among, and surrounding), the direction relation (azimuth), and their combinations. The formalized relational contexts were then used to define locally contextual regions that identify objects to be reclassified in a post-classification process, improving the results of an initial classification. The experimental results demonstrate that relational contexts can significantly improve the accuracies of buildings, water, trees, roads, other surfaces, and shadows. The relational contexts, as well as their combinations, can be regarded as a contribution to post-processing classification techniques within the GEOBIA framework, helping to recognize image objects that cannot be distinguished in an initial classification.
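As a minimal sketch of the direction relation named in the abstract, the azimuth between two image objects can be computed from their centroids; the function below is an illustrative assumption (not the paper's implementation), taking centroids as (x, y) pairs in a projected coordinate system and returning degrees clockwise from north:

```python
import math

def azimuth(c1, c2):
    """Azimuth from centroid c1 to centroid c2, in degrees clockwise
    from north (0 = north, 90 = east). Centroids are (x, y) tuples
    in a projected coordinate system."""
    dx = c2[0] - c1[0]
    dy = c2[1] - c1[1]
    # atan2(dx, dy) measures the angle from the +y (north) axis
    return math.degrees(math.atan2(dx, dy)) % 360.0

# Example: an object due east of another lies at azimuth 90.
print(azimuth((0.0, 0.0), (1.0, 0.0)))  # -> 90.0
```

A post-classification rule could then, for example, restrict a candidate "shadow" label to objects whose azimuth from a neighboring building falls within an expected range given the sun's direction.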