Abstract
Land cover information is becoming increasingly important for urban planning, change detection, and land management. Fusing point clouds and images increases the accuracy of land use classification by exploiting the complementary strengths of both modalities. Visually and spectrally similar classes, such as buildings and roads, low and high vegetation, and impervious and bare surfaces, are difficult to separate; models that fail to discriminate them produce misclassifications, false detections, and unreliable land cover maps. This research therefore proposes the fusion of dense point clouds and multi-spectral images using a dual-stream deep convolutional model that adds vegetation and elevation information to the spectral information. To fuse the features of both modalities, a dual-stream deep neural network based on the DeepLabv3+ architecture is implemented, with the Xception (Extreme Inception) model serving as the backbone feature extractor. Model performance is evaluated using F1-Score and Overall Accuracy. After adding height and vegetation information, the model achieves 93.4% in both Overall Accuracy and F1-Score. The results show improvements across all metrics, indicating that data fusion with the proposed model outperforms existing state-of-the-art models.
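To illustrate the dual-stream fusion idea described above, the following is a minimal sketch, not the authors' implementation: plain convolutional blocks stand in for the Xception backbone and DeepLabv3+ decoder, and the class name `DualStreamFusionNet`, the channel counts (4 spectral bands, 2 auxiliary rasters such as nDSM and NDVI), and the number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=1):
    """3x3 conv -> batch norm -> ReLU (simplified stand-in for Xception blocks)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DualStreamFusionNet(nn.Module):
    """Toy dual-stream segmentation model: one encoder for multi-spectral imagery,
    one for rasterised point-cloud products (e.g. normalised height, vegetation
    index), fused by channel concatenation before a simple decoder."""

    def __init__(self, spectral_ch=4, aux_ch=2, num_classes=6):
        super().__init__()
        # Spectral stream (e.g. R, G, B, NIR bands)
        self.spectral_enc = nn.Sequential(
            conv_block(spectral_ch, 32, stride=2),
            conv_block(32, 64, stride=2),
        )
        # Auxiliary stream (e.g. nDSM height + NDVI rasters derived from the point cloud)
        self.aux_enc = nn.Sequential(
            conv_block(aux_ch, 32, stride=2),
            conv_block(32, 64, stride=2),
        )
        # Fusion + decoder: concatenate feature maps, refine, upsample, classify per pixel
        self.decoder = nn.Sequential(
            conv_block(128, 64),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, spectral, aux):
        fused = torch.cat([self.spectral_enc(spectral), self.aux_enc(aux)], dim=1)
        return self.decoder(fused)  # per-pixel class logits


if __name__ == "__main__":
    model = DualStreamFusionNet()
    img = torch.randn(1, 4, 256, 256)   # multi-spectral tile
    aux = torch.randn(1, 2, 256, 256)   # stacked nDSM + NDVI rasters
    print(model(img, aux).shape)        # torch.Size([1, 6, 256, 256])
```

In the paper's actual model, each stream would be an Xception feature extractor feeding a DeepLabv3+ decoder; the sketch only shows how the two modalities' features can be kept separate until fusion and then combined for dense prediction.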