Abstract

Deep-learning-based methods are widely used in multisource remote-sensing image classification, and their performance gains confirm the effectiveness of deep learning for classification tasks. However, inherent problems of deep-learning models still hinder further improvement of classification accuracy. For example, after multiple rounds of optimization, representation bias and classifier bias accumulate, preventing further gains in network performance. In addition, imbalanced fusion of information among multisource images leads to insufficient information interaction during fusion, making it difficult to fully exploit the complementary information of multisource data. To address these issues, a Representation-enhanced Status Replay Network (RSRNet) is proposed. First, a dual augmentation scheme comprising modal augmentation and semantic augmentation is proposed to enhance the transferability and discreteness of feature representations, thereby reducing the impact of representation bias in the feature extractor. Then, to alleviate classifier bias and keep the decision boundary stable, a status replay strategy (SRS) is built to regulate the learning and optimization of the classifier. Finally, to improve the interactivity of modal fusion, a novel cross-modal interactive fusion (CMIF) method is employed to jointly optimize the parameters of the different branches by combining multisource information. Quantitative and qualitative results on three datasets demonstrate that RSRNet outperforms other state-of-the-art methods in multisource remote-sensing image classification.
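The abstract does not specify how CMIF is implemented internally. The following is a minimal sketch, in PyTorch, of one plausible realization: two modality branches (for example, hyperspectral and LiDAR encoders) exchange information through bidirectional cross-attention so that both branches are optimized jointly through the fused output. The module name, layer sizes, and the choice of multi-head attention are illustrative assumptions, not the paper's design.

import torch
import torch.nn as nn


class CrossModalInteractiveFusion(nn.Module):
    """Hypothetical two-branch fusion: each branch attends to the other
    modality's features, so gradients from the fused prediction flow
    back through both encoders. All hyperparameters are assumptions."""

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        # Cross-attention in both directions: A queries B, and B queries A.
        self.attn_a2b = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_b2a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_b = nn.LayerNorm(dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (batch, tokens, dim) features from two modality branches.
        a_enriched, _ = self.attn_a2b(feat_a, feat_b, feat_b)  # A attends to B
        b_enriched, _ = self.attn_b2a(feat_b, feat_a, feat_a)  # B attends to A
        a = self.norm_a(feat_a + a_enriched)  # residual keeps modality-specific info
        b = self.norm_b(feat_b + b_enriched)
        # Concatenate token-wise and project to a joint representation.
        return self.fuse(torch.cat([a, b], dim=-1))


if __name__ == "__main__":
    fusion = CrossModalInteractiveFusion(dim=64, num_heads=4)
    hsi = torch.randn(2, 49, 64)    # e.g., 7x7 patch tokens from an HSI branch
    lidar = torch.randn(2, 49, 64)  # matching tokens from a LiDAR branch
    print(fusion(hsi, lidar).shape)  # torch.Size([2, 49, 64])

In a design of this kind, a classification loss applied to the fused representation backpropagates through both attention paths, which is one concrete way the parameters of different branches could be optimized jointly with combined multisource information.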
