Abstract

Land cover classification provides essential baseline information and key parameters for environmental change research, national geographic conditions monitoring, and sustainable development planning. Deep learning can automatically extract multi-level features of complex ground objects and has proven to be an effective method for information extraction. However, a major challenge of deep learning is its poor interpretability, which makes it difficult to understand and explain the reasoning behind its classification results. This paper proposes a deep cross-modal coupling model (CMCM) that integrates semantic features with visual features, introducing knowledge graph representations into remote sensing image classification. Compared with previous studies, the proposed method provides accurate descriptions of semantic objects within complex land cover environments. The results show that the integration of semantic knowledge improves both the accuracy and the interpretability of land cover classification.
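The abstract does not specify how the semantic and visual modalities are coupled. As a rough illustration of the general idea, the following PyTorch sketch fuses a visual feature vector (e.g., from a CNN backbone) with a semantic embedding (e.g., derived from a knowledge graph). The class name, dimensions, and the fusion strategy (projection, concatenation, and a linear classifier) are illustrative assumptions, not the authors' CMCM architecture.

```python
# Hypothetical sketch of cross-modal feature coupling (not the paper's CMCM).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, visual_dim: int, semantic_dim: int, num_classes: int):
        super().__init__()
        # Project both modalities into a shared space before coupling.
        self.visual_proj = nn.Linear(visual_dim, 256)
        self.semantic_proj = nn.Linear(semantic_dim, 256)
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, visual_feat, semantic_feat):
        v = torch.relu(self.visual_proj(visual_feat))
        s = torch.relu(self.semantic_proj(semantic_feat))
        fused = torch.cat([v, s], dim=-1)  # simple concatenation coupling
        return self.classifier(fused)

# Usage with dummy tensors: a batch of 4 samples.
model = CrossModalFusion(visual_dim=2048, semantic_dim=128, num_classes=10)
logits = model(torch.randn(4, 2048), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 10])
```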
