Abstract

In multi-source image pixel-wise classification, each image provides different and complementary information about the same area or scene, but how to integrate that information for decision-making remains a difficult problem. In this paper, we focus on the characteristics of multi-source imagery and propose a novel pixel-wise classification method, named the deep multi-level fusion network. The proposed method classifies multi-sensor data including very high-resolution (VHR) RGB imagery, hyperspectral imagery (HSI), and multispectral light detection and ranging (MS-LiDAR) point cloud data. First, a deep spectral–spatial attention network is proposed to process the HSI and MS-LiDAR images and produce a learned classification map, based on feature-level fusion. Next, a down-superpixel segmentation algorithm is proposed to segment the VHR RGB imagery. Finally, the feature-level fusion results are refined by the down-superpixel segmentation results at the decision level to obtain the final classification. Extensive experiments and analyses on the grss_dfc_2018 data set demonstrate that the proposed multi-level fusion network achieves better results in multi-source image pixel-wise classification.
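The decision-level refinement step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the feature-level fusion stage yields a per-pixel class map and the down-superpixel stage yields a segmentation label map, and it uses majority voting within each superpixel as the refinement rule (the paper's exact rule may differ).

```python
import numpy as np

def refine_by_superpixels(class_map, superpixels):
    """Decision-level refinement (illustrative): assign each superpixel
    the majority class of the pixels it covers in the fused map.
    Majority voting is an assumption, not the paper's stated rule."""
    refined = np.empty_like(class_map)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        labels, counts = np.unique(class_map[mask], return_counts=True)
        refined[mask] = labels[np.argmax(counts)]  # overwrite with majority label
    return refined

# Toy example: a 4x4 fused classification map with one noisy pixel,
# and a segmentation that splits the image into left/right superpixels.
class_map = np.array([[0, 0, 1, 1],
                      [0, 1, 1, 1],   # the (1, 1) pixel disagrees with its region
                      [0, 0, 1, 1],
                      [0, 0, 1, 1]])
superpixels = np.array([[0, 0, 1, 1]] * 4)
refined = refine_by_superpixels(class_map, superpixels)
# The noisy pixel is snapped to the majority class of its superpixel.
```

Such per-segment voting smooths isolated misclassifications while preserving the object boundaries captured by the VHR RGB segmentation.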
