Abstract
In this study, a classification and performance evaluation framework for the recognition of urban patterns in medium (Landsat ETM, TM and MSS) and very high resolution (WorldView-2, Quickbird, Ikonos) multi-spectral satellite images is presented. The study aims to explore the potential of machine learning algorithms in the context of object-based image analysis and to thoroughly test the algorithms' performance under varying conditions in order to optimize their usage for urban pattern recognition tasks. Four classification algorithms, Normal Bayes, K Nearest Neighbors, Random Trees and Support Vector Machines, which represent different concepts in machine learning (probabilistic, nearest neighbor, tree-based, function-based), have been selected and implemented on a free and open-source basis. Particular focus is given to assessing the generalization ability of machine learning algorithms and the transferability of trained learning machines between different image types and image scenes. Moreover, the influence of the number and choice of training data, the influence of the size and composition of the feature vector, and the effect of image segmentation on the classification accuracy are evaluated.
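The four classifier families named above can be compared side by side in a few lines. The sketch below uses scikit-learn as an illustrative stand-in for the free and open-source implementation mentioned in the abstract (the paper's own tooling may differ), and it trains on synthetic data in place of real per-segment feature vectors:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-segment feature vectors and class labels
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# One representative of each machine learning concept from the abstract;
# hyperparameters here are illustrative defaults, not the paper's settings.
classifiers = {
    "Normal Bayes": GaussianNB(),
    "K Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "Random Trees": RandomForestClassifier(n_estimators=100, random_state=0),
    "Support Vector Machine": SVC(kernel="rbf"),
}

results = {}
for name, clf in classifiers.items():
    # 5-fold cross-validation gives a simple accuracy estimate per algorithm
    scores = cross_val_score(clf, X, y, cv=5)
    results[name] = scores.mean()
    print(f"{name}: {results[name]:.2f}")
```

In a real evaluation the rows of `X` would be the per-segment feature vectors produced by image segmentation, and the cross-validation folds would be replaced by the transferability experiments described in the abstract (training on one scene, testing on another).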
Highlights
Extraction of information on the built environment from remote sensing imagery is a complex task, mainly due to the manifold combinations of surface materials and the diversity of size, shape and placement of the objects composing a typical image scene
It is increasingly being recognized that image domains beyond spectral information, such as geometrical, temporal or textural domains, must be utilized in order to tackle the complexity of the information extraction
This paper describes a method for the recognition of urban patterns at different spatial scales in Medium Resolution (MR) and Very High Resolution (VHR) multi-spectral satellite images using machine learning algorithms in the context of a state-of-the-art object-based image analysis
Summary
Extraction of information on the built environment from remote sensing imagery is a complex task, mainly due to the manifold combinations of surface materials and the diversity of size, shape and placement of the objects composing a typical image scene. It is increasingly being recognized that image domains beyond spectral information, such as geometrical, temporal or textural domains, must be utilized in order to tackle the complexity of the information extraction. In this regard, algorithms that make use of the extended information content of image segments (referred to as super-pixels or objects) have been adopted by the remote sensing community in recent years [1]. Such an object-based image analysis emerged primarily in the context of Very High Resolution (VHR) image analysis (Ground Sampling Distance (GSD) < 1–4 m), where it showed advantages over pixel-based approaches [2] for extracting detailed thematic information such as single buildings or streets [3,4]. This includes spectral features such as mean and standard deviation values per image band, minimum and maximum pixel values, mean band ratios or mean and standard deviation of band indices such as the Normalized Difference Vegetation Index (NDVI).
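The per-segment spectral features listed above (per-band mean and standard deviation, plus a band index such as NDVI) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the band order, and the choice of red/NIR band indices are assumptions that depend on the sensor.

```python
import numpy as np

def segment_features(image, labels, red_band=2, nir_band=3):
    """Compute a feature vector per image segment.

    image  : (H, W, B) multi-spectral array
    labels : (H, W) integer array assigning a segment id to each pixel
    red_band, nir_band : band indices for NDVI (sensor-dependent assumption)

    Returns a dict mapping segment id -> feature vector of length 2*B + 1
    (mean per band, std per band, mean NDVI of the segment).
    """
    features = {}
    for seg_id in np.unique(labels):
        pixels = image[labels == seg_id]            # (n_pixels, B)
        red = pixels[:, red_band].astype(float)
        nir = pixels[:, nir_band].astype(float)
        # Small epsilon guards against division by zero on dark pixels
        ndvi = (nir - red) / (nir + red + 1e-9)
        features[seg_id] = np.concatenate([
            pixels.mean(axis=0),   # mean value per image band
            pixels.std(axis=0),    # standard deviation per image band
            [ndvi.mean()],         # mean NDVI over the segment
        ])
    return features
```

Minimum/maximum pixel values and band ratios, also mentioned in the summary, would extend the concatenated vector in the same way; the segmentation itself (producing `labels`) is a separate step not shown here.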