Abstract

The effective utilization of hyperspectral image (HSI) and light detection and ranging (LiDAR) data is essential for land cover classification. Recently, deep learning-based classification approaches have achieved remarkable success. However, most deep learning classification methods are data-driven and built on black-box architectures: they lack sufficient interpretability and ignore the potential correlation between the heterogeneous, complementary information carried by multisource data. To address these issues, we propose an interpretable deep neural network, the multisource aligning joint contextual representation model-informed interpretable classification network (MACRMoI-N), which fully exploits the correlation of multisource data by aligning complementary spectral-spatial-elevation information during end-to-end training. We first present a multimodal aligning joint contextual representation classification model (MACR-M), which incorporates local spatial-spectral prior information into the representation. MACR-M is optimized by an iterative algorithm that solves for the dictionaries of the HSI and LiDAR data and their corresponding sparse coefficients; the dictionary distributions are aligned so that the complementary information of the multisource data can guide a more accurate classification. We then unfold this algorithm into MACRMoI-N, in which each module corresponds to a specific operation of the optimization algorithm and the parameters are optimized in an end-to-end manner. Comparative experiments and ablation studies show that MACRMoI-N outperforms other state-of-the-art methods.
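The unfolding idea described above, where each iteration of a sparse-coding optimizer becomes a network module with learnable parameters, can be illustrated with a minimal PyTorch sketch. This is not the authors' MACRMoI-N: the class names, the ISTA-style update, the two-branch fusion, and the Gram-matrix alignment penalty are all illustrative stand-ins, since the abstract does not specify the model's exact operations.

```python
import torch
import torch.nn as nn


class UnrolledISTAStep(nn.Module):
    """One unrolled ISTA iteration with a learnable dictionary and threshold."""

    def __init__(self, dim_in: int, dim_code: int):
        super().__init__()
        # Learned dictionary D, stored as a bias-free linear map code -> input.
        self.D = nn.Linear(dim_code, dim_in, bias=False)
        # Learned soft-threshold, playing the role of the sparsity weight.
        self.theta = nn.Parameter(torch.tensor(0.1))

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Gradient step on ||x - D z||^2:  z <- z + D^T (x - D z).
        z = z + (x - self.D(z)) @ self.D.weight
        # Proximal step for the l1 penalty: soft-thresholding.
        return torch.sign(z) * torch.clamp(z.abs() - self.theta, min=0.0)


class TwoBranchUnrolledNet(nn.Module):
    """Unrolls K sparse-coding steps per modality, then fuses the codes."""

    def __init__(self, dim_hsi: int, dim_lidar: int, dim_code: int,
                 num_classes: int, num_steps: int = 5):
        super().__init__()
        self.dim_code = dim_code
        self.hsi_steps = nn.ModuleList(
            UnrolledISTAStep(dim_hsi, dim_code) for _ in range(num_steps))
        self.lidar_steps = nn.ModuleList(
            UnrolledISTAStep(dim_lidar, dim_code) for _ in range(num_steps))
        self.classifier = nn.Linear(2 * dim_code, num_classes)

    def forward(self, x_hsi: torch.Tensor, x_lidar: torch.Tensor):
        z_h = x_hsi.new_zeros(x_hsi.shape[0], self.dim_code)
        z_l = torch.zeros_like(z_h)
        for step_h, step_l in zip(self.hsi_steps, self.lidar_steps):
            z_h = step_h(x_hsi, z_h)
            z_l = step_l(x_lidar, z_l)
        return self.classifier(torch.cat([z_h, z_l], dim=-1))


def dictionary_alignment_loss(net: TwoBranchUnrolledNet) -> torch.Tensor:
    # Illustrative alignment penalty: pull the Gram matrices of the final
    # HSI and LiDAR dictionaries together so the two code spaces share a
    # common geometry. The paper's actual distribution-alignment term may
    # differ; this is only a plausible stand-in.
    D_h = net.hsi_steps[-1].D.weight    # shape (dim_hsi, dim_code)
    D_l = net.lidar_steps[-1].D.weight  # shape (dim_lidar, dim_code)
    return ((D_h.T @ D_h - D_l.T @ D_l) ** 2).mean()
```

In training, `dictionary_alignment_loss(net)` would be added to the cross-entropy classification loss with a weighting hyperparameter, so the dictionaries and sparse codes are learned end to end while being pushed toward a shared structure across the two modalities.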
