Abstract

Deep learning has become a highly effective approach to remote sensing image classification and detection, owing to advances in object perception models and the availability of large annotated datasets. For specific remote sensing scene classification tasks, however, collecting large and diverse datasets remains difficult, which limits the applicability of trained models. Researchers are therefore increasingly focused on making better use of available data and on the interpretability of learning. Drawing inspiration from research on neural perception in the brain, novel approaches have been proposed to interpret and optimize deep learning models from diverse perspectives. In this paper, we present a brain-inspired network optimization model for remote sensing image scene classification that considers both shape and texture features and reconstructs the feature scaling of the data through feature bias estimation. The model achieves greater robustness through complementary training. We evaluate the optimized model on general datasets by integrating it into an existing benchmark method and comparing its performance with the original approach. The results demonstrate that the proposed model is highly effective: dynamically reconstructed data leads to a significant improvement in model learning.
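The abstract does not specify how feature bias estimation and rescaling are performed; the following is a minimal sketch under the assumption that "feature bias" means a per-channel shift between two feature streams (e.g. texture-oriented and shape-oriented features) and that "reconstructing feature scaling" means removing that shift so the streams can be trained on complementarily. The function names and the estimator itself are hypothetical, not the paper's method.

```python
import numpy as np

def estimate_feature_bias(features, reference):
    """Hypothetical per-channel bias: difference between the mean of a
    feature batch and the mean of a reference feature batch."""
    return features.mean(axis=0) - reference.mean(axis=0)

def rescale_features(features, bias, strength=1.0):
    """Reconstruct feature scaling by subtracting the estimated bias
    (strength controls how aggressively the shift is removed)."""
    return features - strength * bias

# Toy example: texture features whose distribution has drifted by +0.5
# relative to shape features extracted from the same images.
rng = np.random.default_rng(0)
shape_feats = rng.normal(0.0, 1.0, size=(64, 8))
texture_feats = rng.normal(0.5, 1.0, size=(64, 8))

bias = estimate_feature_bias(texture_feats, shape_feats)
corrected = rescale_features(texture_feats, bias)
# After correction the two streams share a common per-channel mean, so a
# classifier can train on both without one stream dominating.
```

In an actual pipeline the correction would be applied dynamically during training, so that the reconstructed data reflects the current state of both feature extractors rather than a one-off statistic.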
