Abstract

Joint use of multisensor information has attracted considerable attention in the remote sensing community. While applications in land-cover observation benefit from information diversity, multisensor integration techniques are confronted with many challenges, including inconsistent data sizes, differing data structures, uncorrelated physical properties, and scarce training data. In this article, an information fusion network, named interleaving perception convolutional neural network (IP-CNN), is proposed for integrating heterogeneous information and improving the joint classification performance of hyperspectral image (HSI) and light detection and ranging (LiDAR) data. Specifically, a bidirectional autoencoder is designed to reconstruct hyperspectral and LiDAR data jointly, and the reconstruction process is trained without any annotated information. Both an HSI-perception constraint and a LiDAR-perception constraint are imposed on the integration of multisource structural information. The fused data are then fed into a two-branch CNN for final classification. To validate the effectiveness of the model, experiments were conducted on three datasets (i.e., MUUFL Gulfport data, Trento data, and Houston data). The results demonstrate that the proposed framework can significantly outperform state-of-the-art methods even with small training sets.
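To make the described pipeline concrete, below is a minimal PyTorch sketch of the two-stage idea: a bidirectional autoencoder pretrained without labels to reconstruct each modality from the other, followed by a two-branch CNN over the fused representations. All layer sizes, module names, and the modeling of the two perception constraints as cross-modal reconstruction losses are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class BidirectionalAE(nn.Module):
    """Reconstructs HSI patches from LiDAR codes and vice versa, so each
    latent code must carry structure from both modalities (unsupervised)."""
    def __init__(self, hsi_bands=64, lidar_bands=1):
        super().__init__()
        self.enc_hsi = nn.Sequential(nn.Conv2d(hsi_bands, 32, 3, padding=1), nn.ReLU())
        self.enc_lidar = nn.Sequential(nn.Conv2d(lidar_bands, 32, 3, padding=1), nn.ReLU())
        self.dec_hsi = nn.Conv2d(32, hsi_bands, 3, padding=1)      # LiDAR code -> HSI
        self.dec_lidar = nn.Conv2d(32, lidar_bands, 3, padding=1)  # HSI code -> LiDAR

    def forward(self, hsi, lidar):
        z_h, z_l = self.enc_hsi(hsi), self.enc_lidar(lidar)
        return self.dec_hsi(z_l), self.dec_lidar(z_h), z_h, z_l

class TwoBranchCNN(nn.Module):
    """Classifies the fused representation with one branch per modality."""
    def __init__(self, n_classes=11):
        super().__init__()
        self.branch_h = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1))
        self.branch_l = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, n_classes)

    def forward(self, z_h, z_l):
        fused = torch.cat([self.branch_h(z_h), self.branch_l(z_l)], dim=1).flatten(1)
        return self.fc(fused)

# Unsupervised pretraining: the HSI- and LiDAR-perception constraints are
# modeled here simply as cross-modal reconstruction losses (an assumption).
ae = BidirectionalAE()
hsi = torch.randn(4, 64, 16, 16)   # toy batch: 64-band HSI patches
lidar = torch.randn(4, 1, 16, 16)  # toy batch: single-band LiDAR patches
rec_hsi, rec_lidar, z_h, z_l = ae(hsi, lidar)
loss = nn.functional.mse_loss(rec_hsi, hsi) + nn.functional.mse_loss(rec_lidar, lidar)
loss.backward()

# Supervised stage: feed the fused latent codes into the two-branch classifier.
logits = TwoBranchCNN()(z_h.detach(), z_l.detach())
```

The key design point this sketch captures is the decoupling of the stages: the autoencoder needs no labels, so the label-hungry part of the pipeline is only the lightweight two-branch classifier, which is consistent with the abstract's claim of strong performance with small training sets.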
