Abstract

Land cover classification provides essential baseline information and key parameters for environmental change research, geographical and national conditions monitoring, and sustainable development planning. Deep learning can automatically extract multi-level features of complex land cover objects and has proven to be an effective method for information extraction. However, one of its major challenges is poor interpretability, which makes it difficult to understand and explain the reasoning behind its classification results. This paper proposes a deep cross-modal coupling model (CMCM) that integrates semantic features and visual features, introducing knowledge graph representations into remote sensing image classification. Compared to previous studies, the proposed method provides accurate descriptions of complex semantic objects within a complex land cover environment. The results show that integrating semantic knowledge improves both the accuracy and the interpretability of land cover classification.
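The core idea of coupling semantic and visual features can be illustrated with a minimal sketch. The fusion scheme below (weighted concatenation followed by prototype matching), the feature vectors, and all names are illustrative assumptions, not the paper's actual CMCM architecture:

```python
# Illustrative sketch only: a minimal late-fusion classifier coupling
# visual features with semantic (knowledge-graph-style) embeddings.
# The fusion scheme and all values are assumptions for illustration,
# not the CMCM architecture described in the paper.

def fuse_features(visual, semantic, alpha=0.5):
    """Weighted concatenation of visual and semantic feature vectors."""
    return [alpha * v for v in visual] + [(1 - alpha) * s for s in semantic]

def classify(fused, class_prototypes):
    """Return the class whose prototype best matches the fused vector."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(class_prototypes, key=lambda c: dot(fused, class_prototypes[c]))

# Hypothetical inputs: CNN-style image features and a semantic embedding.
visual = [0.9, 0.1, 0.4]
semantic = [0.2, 0.8]
prototypes = {
    "cropland": [0.5, 0.1, 0.2, 0.1, 0.4],
    "forest":   [0.9, 0.0, 0.5, 0.1, 0.1],
}
fused = fuse_features(visual, semantic)
print(classify(fused, prototypes))  # prints "forest"
```

The point of the sketch is only that the semantic channel contributes evidence alongside the visual channel, so a class decision can be traced back to both modalities, which is what makes such a coupling more interpretable than a purely visual classifier.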
