Abstract
Multisensor data fusion has become an active topic in the remote sensing research community, driven by significant technological advances and by the ability to extract information that would be difficult to obtain from a single sensor. Exploiting this sensory enhancement, however, requires advanced analysis methods such as deep learning. We design a framework that effectively fuses hyperspectral and lidar data for semantic segmentation of urban environments. Our method reduces dimensionality by extracting the most representative features from the hyperspectral and lidar data and uses them for supervised semantic segmentation. In addition, we compare segmentation models based on 2D and 3D convolutional operations across two architectures, U-Net and ResU-Net. All models were trained with three loss functions: standard Categorical Cross-Entropy, Focal Loss, and a combination of Focal Loss and Jaccard Distance (Focal–Jaccard Loss). Experimental results demonstrate that the 3D U-Net and ResU-Net models trained with the Focal and Focal–Jaccard Loss functions significantly outperform their Categorical Cross-Entropy counterparts. The resulting segmentations achieve high accuracy scores and faithfully preserve the complex geometry of urban objects.
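The abstract only names the loss functions, so the sketch below shows one plausible way a Focal–Jaccard combination could be formulated for a softmax-output segmentation model. It is a minimal NumPy illustration, not the paper's implementation: the weighting factor `alpha`, the focusing parameter `gamma=2`, and the `smooth` term are assumptions for the sake of the example.

```python
import numpy as np

def focal_loss(y_true, y_pred, gamma=2.0, eps=1e-7):
    """Categorical focal loss: cross-entropy modulated by (1 - p)^gamma,
    which down-weights pixels the model already classifies confidently.
    y_true, y_pred: (num_pixels, num_classes); y_pred are softmax probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    ce = -y_true * np.log(y_pred)          # per-class cross-entropy (one-hot picks the true class)
    weight = (1.0 - y_pred) ** gamma       # focusing term
    return np.sum(weight * ce, axis=-1).mean()

def jaccard_distance(y_true, y_pred, smooth=1.0):
    """Soft Jaccard distance (1 - IoU), averaged over classes.
    The smooth term avoids division by zero for absent classes (assumption)."""
    intersection = np.sum(y_true * y_pred, axis=0)
    union = np.sum(y_true + y_pred, axis=0) - intersection
    iou = (intersection + smooth) / (union + smooth)
    return 1.0 - iou.mean()

def focal_jaccard_loss(y_true, y_pred, alpha=1.0):
    """Hypothetical combination: focal term plus an alpha-weighted Jaccard distance."""
    return focal_loss(y_true, y_pred) + alpha * jaccard_distance(y_true, y_pred)

# Toy usage: 100 pixels, 5 classes
rng = np.random.default_rng(0)
y_true = np.eye(5)[rng.integers(0, 5, size=100)]           # one-hot labels
logits = rng.normal(size=(100, 5))
y_pred = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax
print(focal_jaccard_loss(y_true, y_pred))
```

Pairing a pixel-wise focal term with a region-level Jaccard term is a common design choice in segmentation: the focal component handles class imbalance at the pixel level, while the Jaccard component directly optimizes overlap with the ground-truth regions.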