Abstract

Prior knowledge of the possible spatial relationships between land cover types is one factor that makes remote sensing image classification “smarter”. In recent years, knowledge graphs, which are based on a graph data structure, have been studied in the remote sensing community for their ability to build extensible relationships between geographic entities. This paper implements a classification scheme that considers the neighborhood relationships of land cover types by extracting information from a graph. First, a graph representing the spatial relationships of land cover types was built from an existing land cover map. Empirical probability distributions of the spatial relationships were then extracted from this graph. Second, an image was classified with an object-based fuzzy classifier. Finally, the memberships of objects and the attributes of their neighboring objects were combined to decide the final classes. Two experiments were carried out; their overall accuracies increased by 5.2% and 0.6%, respectively, showing that this method can correct misclassified patches using the spatial relationships between geo-entities. However, two issues must be considered when applying spatial relationships to image classification. The first is the “siphonic effect” produced by neighboring patches. The second is that global spatial relationships derived from a pre-trained graph lose local spatial relationship information to some degree.
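
The decision step described above can be illustrated with a short sketch. The snippet below is not the authors’ implementation: the class list, the neighbor_prob matrix, and the weighting parameter alpha are illustrative assumptions. It only shows how an object’s fuzzy memberships from an object-based classifier could be combined with empirical neighborhood probabilities taken from a pre-built graph to choose a final class.

```python
import numpy as np

CLASSES = ["water", "forest", "cropland", "built-up"]

# Hypothetical empirical probabilities P(neighboring class | class),
# e.g. derived by counting adjacent patch pairs in an existing map.
neighbor_prob = np.array([
    # water  forest  crop  built-up   <- class of the neighboring object
    [0.40,  0.30,  0.20,  0.10],   # water
    [0.10,  0.50,  0.30,  0.10],   # forest
    [0.05,  0.25,  0.55,  0.15],   # cropland
    [0.10,  0.10,  0.30,  0.50],   # built-up
])

def decide_class(memberships, neighbor_classes, alpha=0.5):
    """Combine spectral fuzzy memberships with neighborhood evidence.

    memberships      : fuzzy membership value per class (sums to 1)
    neighbor_classes : class indices of the adjacent objects
    alpha            : weight given to the spectral memberships (assumed)
    """
    # Average the conditional probabilities contributed by each neighbor.
    context = neighbor_prob[:, neighbor_classes].mean(axis=1)
    combined = alpha * np.asarray(memberships) + (1 - alpha) * context
    return CLASSES[int(np.argmax(combined))]

# An object whose spectra weakly favor "cropland" but whose neighbors are
# all built-up patches is relabeled as "built-up".
print(decide_class([0.05, 0.15, 0.42, 0.38], neighbor_classes=[3, 3, 3]))
```

The same mechanism also illustrates the “siphonic effect” noted in the abstract: if the neighborhood term is weighted too heavily, correctly classified patches can be pulled toward the class of dominant neighbors.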

Highlights

  • Machine learning has been widely used in remote sensing image classification

  • Many researchers have flocked to the field of deep learning to seek breakthroughs in remote sensing image classification methods, precisely because traditional machine learning methods have shown bottlenecks that are difficult to break through [7,8,9,10,11]

  • We first built the pre-trained graph for spatial relationship extraction by selecting regions that contain all types of land cover to perform manual interpretation and classification (a minimal sketch of this extraction step follows this list)
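
The sketch below illustrates how such a pre-trained graph could be derived, under the assumption that the spatial relationships are obtained by counting adjacencies between classes in a labelled raster (4-connectivity here is an assumption; the paper may define neighborhoods differently). The function names and the toy map are hypothetical, not the paper’s code.

```python
import numpy as np
from collections import Counter

def adjacency_counts(label_raster):
    """Count 4-connected adjacencies between different class labels."""
    counts = Counter()
    left, right = label_raster[:, :-1], label_raster[:, 1:]   # horizontal pairs
    up, down = label_raster[:-1, :], label_raster[1:, :]       # vertical pairs
    for a, b in [(left, right), (up, down)]:
        mask = a != b                      # only boundaries between classes
        for pair in zip(a[mask], b[mask]):
            counts[tuple(sorted(pair))] += 1
    return counts

def neighbor_probabilities(counts, classes):
    """Normalise adjacency counts into P(neighboring class | class)."""
    n = len(classes)
    matrix = np.zeros((n, n))
    for (a, b), c in counts.items():
        matrix[a, b] += c
        matrix[b, a] += c
    row_sums = matrix.sum(axis=1, keepdims=True)
    return np.divide(matrix, row_sums, out=np.zeros_like(matrix),
                     where=row_sums > 0)

# Toy manually interpreted map: 0 = water, 1 = forest, 2 = cropland
toy_map = np.array([[0, 0, 1, 1],
                    [0, 1, 1, 2],
                    [2, 2, 2, 2]])
print(neighbor_probabilities(adjacency_counts(toy_map), classes=[0, 1, 2]))
```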



Introduction

Machine learning has been widely used in remote sensing image classification. Some studies have experimented on small regions, reaching an overall accuracy of more than 90% [1,2,3,4,5]. However, a large amount of remote sensing image classification still requires human interpretation or modification [6]. Traditional machine learning focuses on classification based on isolated information, such as spectral, shape, and texture features, for the extraction of ground features. Many researchers have flocked to the field of deep learning to seek breakthroughs in remote sensing image classification methods, precisely because traditional machine learning methods have shown bottlenecks that are difficult to break through [7,8,9,10,11]. Yet training on each sensor’s data and each geographic scene is an unusually tough job for deep learning applications in remote sensing image classification [12]. Knowledge of spatial relations within and between objects can be used as important information in classification [13]: although there are many variables in sensor characteristics, such as band ranges, spatial resolution, and revisit cycles, the distribution of ground features usually follows a certain pattern.
