Abstract

Object-based image analysis (OBIA) has been widely used for land use and land cover (LULC) mapping with optical and synthetic aperture radar (SAR) images because it can exploit spatial information, reduce salt-and-pepper noise, and delineate LULC boundaries. With recent advances in machine learning, convolutional neural networks (CNNs) have become state-of-the-art classification algorithms. However, CNNs cannot be easily integrated with OBIA because the processing unit of a CNN is a rectangular image patch, whereas that of OBIA is an irregular image object. To obtain object-based thematic maps, this study developed a new method that integrates object-based post-classification refinement (OBPR) and CNNs for LULC mapping using Sentinel optical and SAR data. After the CNN produces the classification map, each image object is labeled with the most frequent land cover category among its pixels. The proposed method was tested on the optical-SAR Sentinel Guangzhou dataset with 10 m spatial resolution, the optical-SAR Zhuhai-Macau local climate zones (LCZ) dataset with 100 m spatial resolution, and a hyperspectral benchmark, the University of Pavia dataset, with 1.3 m spatial resolution. It outperformed OBIA with support vector machine (SVM) and random forest (RF) classifiers. SVM and RF benefited more from the combined use of optical and SAR data than the CNN did, whereas the spatial information learned by the CNN was highly effective for classification. With the ability to extract spatial features while preserving object boundaries, the proposed method considerably improved the classification accuracy of urban ground targets, achieving an overall accuracy (OA) of 95.33% on the Sentinel Guangzhou dataset, 77.64% on the Zhuhai-Macau LCZ dataset, and 95.70% on the University of Pavia dataset with only 10 labeled samples per class.
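
To make the refinement step concrete, the following is a minimal sketch of the majority-vote relabeling described above, not the authors' implementation; the function name obpr_majority_vote and the NumPy array layout are illustrative assumptions.

```python
import numpy as np

def obpr_majority_vote(cnn_labels: np.ndarray, segments: np.ndarray) -> np.ndarray:
    """Relabel each image object with the most frequent CNN class among its pixels.

    cnn_labels: (H, W) integer class map predicted pixel-wise by the CNN.
    segments:   (H, W) integer object IDs from an OBIA image segmentation.
    Returns an (H, W) object-based class map.
    """
    refined = np.empty_like(cnn_labels)
    for obj_id in np.unique(segments):
        mask = segments == obj_id
        # Majority vote: assign the most frequent land cover category
        # within the object to every pixel of that object.
        classes, counts = np.unique(cnn_labels[mask], return_counts=True)
        refined[mask] = classes[np.argmax(counts)]
    return refined
```

Because every pixel inside an object receives the same label, the refined map inherits the segmentation's object boundaries while keeping the CNN's per-pixel evidence.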

Highlights

  • The proposed method achieved the highest classification accuracy on the Sentinel Guangzhou dataset, with an overall accuracy (OA) of 95.33% and a κ of 0.94, considerably higher than object-based image analysis with support vector machine (OBIA-SVM; OA of 90.22%, κ of 0.89) and with random forest (OBIA-RF; OA of 88.20%, κ of 0.86)

  • The classification accuracy obtained by a standard convolutional neural network (CNN) alone (OA of 91.10% and κ of 0.90) was already higher than that of OBIA-SVM and OBIA-RF, indicating that the spatial information extracted by the CNN was helpful for LULC classification


Introduction

Land use and land cover (LULC) information is essential for forest monitoring, climate change studies, and environmental and urban management [1,2,3,4]. Remote sensing techniques are widely used for LULC investigation because of their capability to observe land surfaces routinely and at large scales. The most commonly used remotely sensed data are optical images, such as those from Landsat [5,6,7]. Synthetic aperture radar (SAR) images are also used for LULC classification because they can be acquired independently of weather conditions [8,9,10,11,12]. Unlike optical data, which capture spectral information, SAR data characterize the structural and dielectric properties of ground targets [13]. Combining optical and SAR data therefore yields a more comprehensive observation of ground targets and has great potential to improve the accuracy of LULC classification [14].
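
As a concrete illustration of this kind of fusion, the sketch below stacks co-registered optical and SAR bands into a single feature cube before classification. This is not the paper's pipeline; the array shapes, band choices, and the per_band_standardize helper are assumptions for illustration only.

```python
import numpy as np

def per_band_standardize(img: np.ndarray) -> np.ndarray:
    """Standardize each band to zero mean and unit variance."""
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True)
    return (img - mean) / (std + 1e-8)

# Toy stand-ins for co-registered imagery (real data would additionally
# require orthorectification, co-registration, and radiometric calibration).
optical = np.random.rand(256, 256, 4)  # e.g., Sentinel-2 blue/green/red/NIR
sar = np.random.rand(256, 256, 2)      # e.g., Sentinel-1 VV/VH backscatter

# Channel stacking: a simple pixel-level fusion of the two sources,
# producing one multi-source input for a classifier such as a CNN.
fused = np.concatenate(
    [per_band_standardize(optical), per_band_standardize(sar)], axis=-1
)
print(fused.shape)  # (256, 256, 6)
```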
