Abstract

Scene classification based on multi-source remote sensing images is important for image interpretation and has many applications, such as change detection, visual navigation and image retrieval. Deep learning has become a research hotspot in the field of remote sensing scene classification, and datasets are an important driving force for its development. Most remote sensing scene classification datasets consist of optical images, and multimodal datasets are relatively rare. Existing datasets that contain both optical and SAR data, such as SARptical and WHU-SEN-City, mainly focus on urban areas and lack a wide variety of scene categories. This largely limits the development of domain adaptation algorithms for remote sensing scene classification. In this paper, we propose a multi-modal remote sensing scene classification dataset (MRSSC) based on Tiangong-2, a Chinese manned spacecraft that can acquire optical and SAR images at the same time. The dataset contains 12,167 images (6155 optical and 6012 SAR) of seven typical scenes, namely city, farmland, mountain, desert, coast, lake and river. Our dataset is evaluated with state-of-the-art domain adaptation methods to establish a baseline with an average classification accuracy of 79.2%. The MRSSC dataset will be released freely for educational purposes and can be found at the China Manned Space Engineering data service website (http://www.msadc.cn). This dataset fills the gap in remote sensing scene classification between different image sources and paves the way for a generalized image classification model for multi-modal earth observation data.

Highlights

  • With the rapid growth of remote sensing satellites, massive multisource remote sensing data will show an explosive trend of growth

  • To test the effectiveness of the proposed multi-modal remote sensing scene classification (MRSSC) dataset for scene classification, experiments are carried out using eight baseline domain adaptation methods, which can be divided into three main categories: discrepancy-based, adversarial-based and others

  • The overall accuracy is computed as OA = (1/N) Σ_{i=1}^{r} x_ii, where N is the number of test images (700 in this paper), i is the index of each class, r is the number of classes (7 in this paper), and x_ii is the number of correct predictions in each class. In Table 6, the overall accuracies of the eight baseline domain adaptation (DA) methods are summarized, where Last OA is the overall accuracy of the test set after 10 epochs of training, and Best OA is the performance of the best model obtained during the 10 epochs of training
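The overall-accuracy definition above can be sketched in a few lines: OA is simply the trace of the confusion matrix (correct predictions per class, summed) divided by the total number of test images. The confusion-matrix values below are illustrative, not taken from the paper's Table 6.

```python
import numpy as np

def overall_accuracy(confusion: np.ndarray) -> float:
    """OA = (1/N) * sum_i x_ii: diagonal entries are the correct
    predictions per class, and the full matrix sums to N."""
    return float(np.trace(confusion) / confusion.sum())

# Illustrative 7-class confusion matrix with 100 test images per class
# (N = 700, as in the paper): 88 correct and 12 misclassified per class.
cm = np.full((7, 7), 2)      # 2 errors spread over each off-diagonal cell
np.fill_diagonal(cm, 88)     # 88 correct predictions per class
print(overall_accuracy(cm))  # 0.88
```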


Summary

INTRODUCTION

With the rapid growth of remote sensing satellites, massive multi-source remote sensing data will continue to grow explosively. A classification model trained on one data source, for example optical remote sensing images, cannot always be transferred successfully to another data source, such as SAR images, owing to the domain difference caused by different sensors. To tackle this problem, state-of-the-art studies apply domain adaptation (DA), a transfer learning technique. We evaluate our dataset using eight state-of-the-art domain adaptation methods to establish a baseline for future research. This dataset fills the gap in remote sensing scene classification between different image sources and paves the way for a generalized image classification model for multi-modal earth observation data.
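The highlights note that the baseline DA methods include a discrepancy-based category. A minimal sketch of one such discrepancy measure, a linear Maximum Mean Discrepancy (MMD) penalty between batches of source (optical) and target (SAR) features, is shown below; the feature dimensions and distributions are illustrative assumptions, not details from the paper.

```python
import numpy as np

def linear_mmd(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Squared distance between the mean feature vectors of two domains.
    Adding such a term to the classification loss encourages the feature
    extractor to produce domain-invariant representations."""
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)

# Toy batches: 64-dim features from 32 optical and 32 SAR images, with the
# SAR distribution deliberately shifted to mimic a domain gap.
rng = np.random.default_rng(42)
optical = rng.normal(0.0, 1.0, size=(32, 64))
sar = rng.normal(0.5, 1.0, size=(32, 64))
print(linear_mmd(optical, sar))  # positive; shrinks as the domains align
```

Discrepancy-based methods minimize a statistic like this alongside the task loss; adversarial-based methods instead train a domain discriminator that the feature extractor learns to fool.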

Introduction to Data Sources
Data Acquisition and Processing
Remote Sensing Scene Category Selection
Characteristics of MRSSC Dataset
Domain Adaptation Baseline Algorithms
Experimental Settings
Overall Accuracy
Method
Scene Classification Confusion Matrix
CONCLUSIONS
