Abstract

Multi-sensor images can provide complementary information, usually leading to better performance in classification tasks. However, general deep neural network-based multi-sensor classification methods learn each sensor image separately, followed by stacked concatenation for feature fusion. This approach incurs a large training-time cost and may result in insufficient feature fusion. Considering efficient multi-sensor feature extraction and fusion with a lightweight network, this paper proposes an attention-guided classification method (AGCNet), especially for multispectral (MS) and panchromatic (PAN) image classification. In the proposed method, a share-split network (SSNet) comprising a shared branch and multiple split branches performs feature extraction for each sensor image: the shared branch learns basis features of the MS and PAN images with fewer learnable parameters, and the split branches extract the privileged features of each sensor image via multiple task-specific attention units. Furthermore, a selective classification network (SCNet) with a selective kernel unit is used for adaptive feature fusion. The proposed AGCNet can be trained in an end-to-end fashion without manual intervention. Experimental results are reported on four MS and PAN datasets and compared with state-of-the-art methods. The classification maps and accuracies show the superiority of the proposed AGCNet model.
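
To make the design concrete, the following is a minimal PyTorch sketch of the share-split and selective-fusion ideas described above. The layer sizes, the squeeze-and-excitation-style attention unit, and the two-branch selective kernel below are illustrative assumptions, not the authors' exact architecture; the sketch also assumes the MS and PAN patches have been co-registered and resampled to a common spatial size.

```python
# A minimal sketch of the share-split idea: module names mirror the paper's
# SSNet/SCNet, but every layer size and the attention design are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionUnit(nn.Module):
    """Task-specific channel attention (squeeze-and-excitation style, assumed)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # global average pool -> channel weights
        return x * w[:, :, None, None]       # re-weight the feature channels

class SSNet(nn.Module):
    """Shared branch learns basis features; split branches add privileged ones."""
    def __init__(self, ms_bands=4, pan_bands=1, channels=32):
        super().__init__()
        # Sensor-specific stems map both inputs to a common channel count,
        # so a single shared trunk can process either sensor's features.
        self.ms_stem = nn.Conv2d(ms_bands, channels, 3, padding=1)
        self.pan_stem = nn.Conv2d(pan_bands, channels, 3, padding=1)
        self.shared = nn.Sequential(         # weights shared by both sensors
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.ms_split = AttentionUnit(channels)   # sensor-specific attention
        self.pan_split = AttentionUnit(channels)

    def forward(self, ms, pan):
        return (self.ms_split(self.shared(self.ms_stem(ms))),
                self.pan_split(self.shared(self.pan_stem(pan))))

class SCNet(nn.Module):
    """Selective-kernel-style fusion: softmax weights pick between the branches."""
    def __init__(self, channels=32, num_classes=10, reduction=4):
        super().__init__()
        self.squeeze = nn.Linear(channels, channels // reduction)
        self.select = nn.Linear(channels // reduction, 2 * channels)
        self.head = nn.Linear(channels, num_classes)

    def forward(self, f_ms, f_pan):
        s = (f_ms + f_pan).mean(dim=(2, 3))              # sum-fuse, then squeeze
        a = self.select(F.relu(self.squeeze(s)))         # per-channel branch scores
        a = a.view(-1, 2, f_ms.size(1)).softmax(dim=1)   # compete across branches
        fused = a[:, 0, :, None, None] * f_ms + a[:, 1, :, None, None] * f_pan
        return self.head(fused.mean(dim=(2, 3)))

# Example: 16x16 co-registered MS/PAN patches (sizes are assumptions).
ssnet, scnet = SSNet(), SCNet(num_classes=10)
logits = scnet(*ssnet(torch.randn(8, 4, 16, 16), torch.randn(8, 1, 16, 16)))
```

The key point is the last module: instead of concatenating the two feature stacks, the selective kernel unit produces data-dependent softmax weights so that each channel can favour the MS or the PAN branch per sample, which is what makes the fusion adaptive.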

Highlights

  • The rapid development of aerospace technology has generated a large number of remote sensing images from a variety of sensors [1,2,3,4], and research interest in multi-sensor image classification is increasing, especially for multispectral (MS) and panchromatic (PAN) images

  • Seven state-of-the-art methods are compared to verify the effectiveness of the proposed attention-guided classification method (AGCNet): extended multi-attribute profiles (EMAP) [55], convolutional auto-encoder (CAE) [18], recurrent neural network (RNN) [22], spatial-channel progressive fusion residual network (SCPF-ResNet) [49], convolutional neural network based on MS images (CNN-MS) [53], convolutional neural network based on PAN images (CNN-PAN) [53], and stacked fusion network (SFNet) [32]

  • In SFNet, the features of the MS and PAN images are extracted by separate CNNs and the two feature sets are concatenated for classification; the feature-fusion strategy follows the method in [32], and the CNN parameter settings remain consistent with Table 1 for a fair comparison (a minimal sketch of this stacked-concatenation scheme follows the list)
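
As a point of contrast with AGCNet's adaptive fusion, below is a hypothetical sketch of the stacked-concatenation scheme used by SFNet-style baselines. The layer sizes are assumptions and do not reproduce the settings of Table 1.

```python
# Hypothetical sketch of the stacked-concatenation baseline: one CNN per
# sensor, features concatenated before the classifier. Layer sizes are
# assumptions and do not reproduce the settings of Table 1.
import torch
import torch.nn as nn

def make_branch(in_bands, channels=32):
    return nn.Sequential(
        nn.Conv2d(in_bands, channels, 3, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class StackedFusionBaseline(nn.Module):
    def __init__(self, num_classes=10, channels=32):
        super().__init__()
        self.ms_cnn = make_branch(4, channels)    # 4-band MS branch
        self.pan_cnn = make_branch(1, channels)   # 1-band PAN branch
        self.classifier = nn.Linear(2 * channels, num_classes)

    def forward(self, ms, pan):
        f = torch.cat([self.ms_cnn(ms), self.pan_cnn(pan)], dim=1)
        return self.classifier(f)
```

Because the concatenation weights both branches equally for every sample, any adaptation between sensors must be learned implicitly by the classifier, which is the shortcoming the selective kernel unit is designed to address.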


Introduction

The rapid development of aerospace technology has generated a large number of remote sensing images from a variety of sensors [1,2,3,4], and research interest in multi-sensor image classification is increasing, especially for multispectral (MS) and panchromatic (PAN) images. MS and PAN images are usually captured by optical satellites and have different characteristics: the MS image consists of four spectral bands, while the PAN image has only one band but a higher spatial resolution. To take full advantage of the complementary spectral and spatial information, the processing methods for MS and PAN images are usually divided into two models: the fusion-based classification model and the classification-based fusion model. The fusion-based classification model pan-sharpens the MS image to improve its spatial resolution, followed by a classification process on the pan-sharpened image.
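
For illustration, a simple and well-known component-substitution pan-sharpening method (the Brovey transform) is sketched below in NumPy. It stands in for the pan-sharpening step of the fusion-based model in general and is not a specific method referenced by this paper; it assumes the MS image has already been upsampled to the PAN grid.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-8):
    """Brovey-style pan-sharpening.

    ms:  (H, W, B) multispectral image, upsampled to the PAN grid.
    pan: (H, W) panchromatic image.
    Returns a (H, W, B) pan-sharpened image.
    """
    intensity = ms.mean(axis=2, keepdims=True)        # crude intensity estimate
    return ms * (pan[..., None] / (intensity + eps))  # inject PAN spatial detail
```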
