Abstract

In recent years, deep learning methods have been widely applied to remote sensing image classification, providing valuable information for environmental monitoring and spatial planning. In practical applications such as these, acquiring massive labeled data for deep convolutional networks is costly and difficult, especially when the data sources are diverse and the requirements keep changing. Transfer learning methods have already shown superior performance in exploiting domain-invariant features of existing data for deep network-based categorization tasks. However, the category imbalance between source and target domains may cause negative transfer and weaken the classifier. Moreover, extracting object-level visual features among easily confused categories remains a difficult problem. In this context, we propose the Multi-adversarial Object-level Attention Network (MOAN) for partial transfer learning and the selection of useful features. On the one hand, we present an improved object-level attention proposal network (OANet) for perceiving the structural features of the main object in an image while weakening unrelated regions. On the other hand, the extracted features are further refined by a multi-adversarial framework that promotes positive transfer, selecting and aligning valuable cross-domain features from shared categories while suppressing the others. This adversarial learning module also generates pseudo labels for target-domain samples, so that integral visual signals can be perceived as in the source domain. In addition, virtual adversarial training is introduced in MOAN to regularize the model and maintain stability. Experimental analyses show that MOAN can significantly promote positive transfer and restrain negative transfer in unsupervised classification problems, achieving higher accuracies and lower loss values on several benchmark data sets.
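To make the regularization step concrete, the following PyTorch-style sketch shows virtual adversarial training (VAT) as it is commonly formulated in the literature: the input is perturbed along the direction that most changes the model's predictive distribution, and the resulting divergence is penalized. The function name vat_loss and the hyperparameter defaults are illustrative assumptions, not MOAN's published implementation.

    import torch
    import torch.nn.functional as F

    def vat_loss(model, x, xi=1e-6, eps=2.5, n_power=1):
        # Reference predictive distribution, held fixed.
        with torch.no_grad():
            p = F.softmax(model(x), dim=1)

        # Random initial direction, refined by power iteration to
        # approximate the most sensitive (adversarial) direction.
        d = torch.randn_like(x)
        for _ in range(n_power):
            d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
            d.requires_grad_(True)
            kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p,
                          reduction="batchmean")
            d = torch.autograd.grad(kl, d)[0].detach()

        # Final virtual adversarial perturbation of norm eps.
        r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(x)
        return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p,
                        reduction="batchmean")

In practice, the returned KL term would be added with a weighting coefficient to the supervised and adversarial losses during training; because it needs no labels, it applies equally to unlabeled target-domain samples.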

Highlights

  • Acquiring sufficient labeled data to train complex models for a specialized application, such as remote sensing scene classification, is consistently challenging

  • Unlike conventional image recognition methods, an object-level attention mechanism and a multi-adversarial learning model are embedded in the framework

  • With the ability to circumvent negative transfer by filtering out irrelevant source data, the Multi-adversarial Object-level Attention Network (MOAN) obtains higher accuracies in the target domain than existing domain adaptation methods (e.g., RTN [61]) on unsupervised partial transfer learning problems. Unlike state-of-the-art adversarial domain adaptation methods such as Adversarial Discriminative Domain Adaptation (ADDA) [3] and Selective Adversarial Network (SAN) [7], MOAN refines and optimizes object-level attentions alternately in order to perceive the objects in images completely (a sketch of the class-weighted adversarial loss follows these highlights)
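As a rough illustration of how per-class weighting can suppress negative transfer in a multi-adversarial framework (in the spirit of SAN [7] and related multi-adversarial methods), the sketch below attaches one domain discriminator per class and scales each discriminator's loss by the classifier's probability for that class. The class name MultiAdversarialLoss and the layer sizes are assumptions for illustration, not MOAN's actual architecture.

    import torch
    import torch.nn as nn

    class MultiAdversarialLoss(nn.Module):
        # One domain discriminator per class; in a full pipeline the
        # features would pass through a gradient-reversal layer first.
        def __init__(self, feat_dim, num_classes):
            super().__init__()
            self.discriminators = nn.ModuleList([
                nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                              nn.Linear(256, 1))
                for _ in range(num_classes)])
            self.bce = nn.BCEWithLogitsLoss(reduction="none")

        def forward(self, features, class_probs, domain_labels):
            # features: (B, feat_dim) shared-backbone features;
            # class_probs: (B, K) softmax outputs (pseudo labels on
            # target samples); domain_labels: (B,) 1=source, 0=target.
            total = 0.0
            for k, disc in enumerate(self.discriminators):
                logits = disc(features).squeeze(1)
                per_sample = self.bce(logits, domain_labels.float())
                # Weight each sample's loss by its probability of
                # belonging to class k.
                total = total + (class_probs[:, k] * per_sample).mean()
            return total

Because outlier (source-only) classes receive near-zero probability on target samples, their discriminators contribute little gradient, which is one mechanism for filtering out irrelevant source data as described above.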

Introduction

Acquiring sufficient labeled data to train complex models for a specialized application, such as remote sensing scene classification, is consistently challenging. When the monitoring targets change across periods and regions, it is impracticable to label massive data for effective but complicated deep networks. Transfer learning is regarded as a good solution to problems of this kind. As a typical transfer learning task, domain adaptation has gained attention from many researchers over the past decade [1].
