Abstract

In recent years, remote sensing researchers have investigated the use of different modalities (or combinations of modalities) for classification tasks. Such modalities can be extracted via a diverse range of sensors and images. However, few studies have examined improving land cover classification accuracy with fused unmanned aerial vehicle (UAV)–digital surface model (DSM) datasets. This study therefore looks at improving the accuracy of such datasets by exploiting convolutional neural networks (CNNs). In this work, we focus on the fusion of DSM and UAV images for land use/land cover mapping via classification into seven classes: bare land, buildings, dense vegetation/trees, grassland, paved roads, shadows, and water bodies. Specifically, we investigated the effectiveness of two datasets with the aim of inspecting whether the fused DSM yields remarkable outcomes for land cover classification. The datasets were: (i) only orthomosaic image data (Red, Green and Blue channel data), and (ii) a fusion of the orthomosaic image and DSM data, where the final classification was performed using a CNN. CNNs are promising classifiers owing to their hierarchical learning structure, regularization and weight sharing over the training data, strong generalization, efficient optimization with reduced parameter counts, automatic feature extraction, and robust discrimination ability with high performance. The experimental results show that a CNN trained on the fused dataset obtains better results, with a Kappa index of ~0.98, an average accuracy of 0.97 and a final overall accuracy of 0.98. Comparing the CNN with DSM against the CNN without DSM revealed improvements in overall accuracy, average accuracy and Kappa index of 1.2%, 1.8% and 1.5%, respectively. Accordingly, adding the heights of features such as buildings and trees improved the differentiation between vegetation classes, particularly where plants were dense.
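The fusion step described above amounts to channel-stacking: the DSM height raster is normalized and appended as a fourth band to the RGB orthomosaic before the stack is fed to the CNN. A minimal NumPy sketch of this idea follows; the function name, array shapes, and min–max normalization are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def fuse_rgb_dsm(rgb, dsm):
    """Stack a DSM height band onto an RGB orthomosaic.

    rgb: (H, W, 3) uint8 image; dsm: (H, W) float heights (e.g. metres).
    Returns an (H, W, 4) float32 array with all bands scaled to [0, 1].
    """
    rgb_n = rgb.astype(np.float32) / 255.0
    # Min-max normalise heights so the DSM band is on the same scale as RGB.
    d = dsm.astype(np.float32)
    d_n = (d - d.min()) / (d.max() - d.min() + 1e-8)
    # dstack broadcasts the (H, W) DSM band to (H, W, 1) and appends it.
    return np.dstack([rgb_n, d_n])

# Toy example: a 2x2 scene where one pixel is a tall "building".
rgb = np.full((2, 2, 3), 128, dtype=np.uint8)
dsm = np.array([[0.0, 0.0], [0.0, 10.0]])
fused = fuse_rgb_dsm(rgb, dsm)
print(fused.shape)  # (2, 2, 4)
```

The resulting four-band tensor can be consumed by any CNN whose first convolution accepts four input channels; the height band is what lets the network separate, for example, dense trees from grassland of similar colour.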

Highlights

  • In the past few years, unmanned aerial vehicles (UAVs) have been extensively used to collect image data over inaccessible/remote areas [1,2,3]

  • Dataset fusion of RGB (Red, Green and Blue) images obtained from UAVs or other sources together with elevation information from digital surface models (DSM) provided a more holistic representation for the construction of accurate maps [11]

  • Based on the success of previous studies [11,15,16,17], this paper examines feature fusion for the specific task of land cover classification, taking advantage of fused DSM–UAV images

Introduction

In the past few years, unmanned aerial vehicles (UAVs) have been extensively used to collect image data over inaccessible/remote areas [1,2,3]. Images captured using UAVs are used for geographical information system databases, datasets for automated decision-making, agricultural mapping, urban planning, land use and land cover detection, and environmental monitoring and assessment [1,5,6,7]. Such images are commonly used as training data in supervised machine learning-based classification tasks [8,9,10]. Jahan et al. [11] fused different LiDAR and hyperspectral datasets, and their derivatives, and showed that the overall accuracy of the fused datasets is higher than that of any single dataset. Another fusion of LiDAR and aerial colour images was performed to enhance building and vegetation detection [11]. Considering DSMs as additional features was shown to improve classification results for image segmentation [14].
