Abstract

Land-cover classification of remote sensing (RS) data in urban areas has long been a challenging task due to the complicated relations between different objects. Recently, the fusion of aerial imagery and light detection and ranging (LiDAR) data has attracted great attention in the RS community. Meanwhile, convolutional neural networks (CNNs) have proven their power in extracting high-level (deep) descriptors that improve RS data classification. In this paper, a CNN-based feature-level framework is proposed to integrate LiDAR data and aerial imagery for object classification in urban areas. In our method, after generating low-level descriptors and fusing them at the feature level by layer-stacking, the proposed framework employs a novel CNN to extract spectral-spatial features for the classification process, which is performed by a fully connected multilayer perceptron (MLP) network. The experimental results reveal that the proposed deep fusion model provides about a 10% improvement in overall accuracy (OA) compared with other conventional feature-level fusion techniques.

Highlights

  • The diversification of geospatial data and the limitations of remote sensing (RS) sensors have attracted the interest of many researchers in developing various data fusion algorithms with greater ability and efficiency (Goshtasby and Nikolov, 2007)

  • From the light detection and ranging (LiDAR) data, the intensity image, the normalized digital surface model (nDSM), and the first-pulse/last-pulse difference, all at 5 cm spatial resolution, were considered as low-level features

  • For the sample data set, a 25×25×6 patch around each pixel, built from the stacked low-level features, is used as the input of the convolutional neural network (CNN)
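The feature-level fusion described in the highlights can be illustrated as follows: the aerial image bands and the LiDAR-derived rasters (intensity, nDSM, pulse difference) are layer-stacked into a 6-band cube, and a 25×25×6 patch is cut around each pixel to feed the CNN. This is a minimal sketch under stated assumptions; the function names, the three-band aerial input, and the reflect-padding at image borders are illustrative choices, not details from the paper.

```python
import numpy as np

def stack_features(rgb, intensity, ndsm, pulse_diff):
    """Layer-stack low-level descriptors into a 6-band feature cube.

    rgb: (H, W, 3) aerial image; intensity, ndsm, pulse_diff: (H, W)
    LiDAR-derived rasters on the same grid. Returns an (H, W, 6) array.
    """
    return np.dstack([rgb, intensity, ndsm, pulse_diff])

def extract_patch(cube, row, col, size=25):
    """Cut a size x size spatial patch centred on pixel (row, col).

    Reflect-padding (an illustrative choice) lets border pixels also
    yield a full size x size x 6 patch for the CNN input.
    """
    half = size // 2
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)),
                    mode="reflect")
    # Pixel (row, col) sits at (row + half, col + half) in the padded
    # cube, so this slice is centred on the original pixel.
    return padded[row:row + size, col:col + size, :]
```

With this layout, every pixel of the scene maps to one 25×25×6 training sample, matching the CNN input size stated above.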

Introduction

The diversification of geospatial data and the limitations of RS sensors have attracted the interest of many researchers in developing various data fusion algorithms with greater ability and efficiency (Goshtasby and Nikolov, 2007). LiDAR can provide height and shape information, which is valuable for better describing a scene than optical sensors alone (Morsy et al., 2017). Since these data sources have specific merits, numerous classification methods have been developed for the fusion of very high resolution (VHR) imagery and LiDAR data over the past two decades (Daneshtalab and Rastiveis, 2017; Xu et al., 2018). The majority of these approaches are based on relatively simple or highly customized decision rules that classify target objects using specific elevation features, vegetation indices, shape, or other information.

