Abstract

Land cover information is becoming increasingly important in urban planning, change detection, and land management. Fusing point clouds with images improves land use classification accuracy by exploiting the advantages of both modalities. Structurally or spectrally similar classes, such as buildings and roads, low and high vegetation, and impervious and bare surfaces, are difficult to discriminate; models often confuse them, leading to misclassifications, false detections, and unreliable land cover maps. This research therefore proposes fusing dense point clouds and multi-spectral images with a dual-stream deep convolutional model that adds vegetation and elevation information to the spectral information. To fuse the features of both modalities, a dual-stream deep neural network based on the DeepLabv3+ architecture is implemented, with the Xception (Extreme Inception) model serving as the backbone feature extractor. Model performance is evaluated with F1-score and Overall Accuracy: after adding height and vegetation information, the model achieves 93.4% in both metrics. The results show improvements across all indices, indicating that data fusion with the proposed model outperforms existing state-of-the-art models.
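The "vegetation and elevation information" added to the spectral bands is commonly derived as an NDVI channel (from the near-infrared and red bands) and a normalised DSM (height above ground from the point cloud). The sketch below shows one plausible way such an input stack could be prepared; the exact preprocessing in the paper is not specified in the abstract, and the channel ordering and array names here are assumptions for illustration.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    A small eps avoids division by zero over dark pixels."""
    return (nir - red) / (nir + red + eps)

def build_fusion_input(rgb, nir, ndsm):
    """Stack spectral (RGB), vegetation (NDVI), and elevation (nDSM)
    channels into a single H x W x 5 array, one candidate input layout
    for a dual-stream fusion model. Channel 0 of `rgb` is assumed red."""
    veg = ndvi(nir, rgb[..., 0])
    return np.dstack([rgb, veg, ndsm])

# Toy example with random reflectances and heights
rgb = np.random.rand(64, 64, 3).astype(np.float32)
nir = np.random.rand(64, 64).astype(np.float32)
ndsm = np.random.rand(64, 64).astype(np.float32)  # height above ground, rasterised from the point cloud
x = build_fusion_input(rgb, nir, ndsm)
print(x.shape)  # (64, 64, 5)
```

In a dual-stream design, the spectral channels and the NDVI/nDSM channels would typically feed separate encoder branches whose features are merged before the decoder, rather than being concatenated at the input; the stacked array above simply illustrates what information each stream carries.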
