Abstract

Scene classification of remote sensing imagery is usually based on supervised learning, but collecting labelled data in remote sensing domains is expensive and time-consuming. The Bag of Visual Words (BOVW) model has achieved great success in scene classification, yet several problems arise in domain-adaptation tasks, such as the influence of background clutter and rotation transformations on the BOVW representation and the transfer of SVM parameters from the source domain to the target domain, all of which may degrade cross-domain scene classification performance. To address these three problems, a color-boosted, saliency-guided, rotation-invariant BOVW representation with parameter transfer is proposed for cross-domain scene classification. A global contrast-based salient region detection method is combined with color boosting to increase the accuracy of the detected salient regions and to reduce the effect of background information on the BOVW representation. A rotation-invariant BOVW representation is also proposed by sorting the BOVW representation of each patch, which reduces the effect of rotation transformations. The several best parameter configurations found in the source domain are then applied to the target domain to reduce the distribution bias between scenes in the source and target domains, and the configuration delivering the top classification performance is taken as the optimal parameter setting in the target domain. Experimental results on two benchmark datasets confirm that the proposed method outperforms most previous methods in scene classification when instances in the target domain are limited. The color-boosted global contrast-based salient region detection (CBGCSRD) method, the rotation-invariant BOVW representation, and the transfer of SVM parameters from the source to the target domain are each shown to be effective, improving classification accuracy by 2.5%, 3.3%, and 3.1%, respectively; together, these three contributions increase classification accuracy by about 7.5%.
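The abstract does not spell out the sorting step in detail. The Python sketch below shows one plausible reading of the rotation-invariant BOVW idea: the per-patch visual-word histograms are ordered by their content rather than by their spatial position, so an image rotation, which only permutes patch positions, leaves the concatenated descriptor unchanged. The function names, the lexicographic sorting key, and the patch layout are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def patch_histogram(word_ids, vocab_size):
    """L1-normalized histogram of visual-word indices for one patch."""
    hist = np.bincount(word_ids, minlength=vocab_size).astype(float)
    return hist / max(hist.sum(), 1.0)

def rotation_invariant_bovw(patch_word_ids, vocab_size):
    """Concatenate per-patch BOVW histograms in a content-defined order.

    `patch_word_ids` is a list of 1-D arrays, one per spatial patch,
    holding the visual-word index assigned to each local descriptor.
    Sorting the patch histograms (here lexicographically, an assumed key)
    removes the dependence on patch position, so a rotation that merely
    permutes the patches yields the same final descriptor.
    """
    hists = [patch_histogram(ids, vocab_size) for ids in patch_word_ids]
    order = sorted(range(len(hists)), key=lambda i: tuple(hists[i]))
    return np.concatenate([hists[i] for i in order])

# Toy usage: a 2x2 patch grid and its 90-degree rotation (a permutation
# of the same patches) produce identical descriptors.
patches = [np.array([0, 1, 1]), np.array([2, 2]), np.array([3]), np.array([0, 3])]
rotated = [patches[2], patches[0], patches[3], patches[1]]
assert np.allclose(rotation_invariant_bovw(patches, 4),
                   rotation_invariant_bovw(rotated, 4))
```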

Highlights

  • With the development of remote sensing sensors, satellites can now offer images with a spatial resolution at the decimeter level

  • We present a cross-domain scene classification method for high-resolution remote sensing images (HRIs) based on a color-boosted, saliency-guided, rotation-invariant Bag of Visual Words (BOVW) representation with parameter transfer; the method consists of four main steps, the first of which applies the color-boosted global contrast-based salient region detection (CBGCSRD) method to compute the salient region of each instance in a source-domain category (a minimal sketch of this detection step follows these highlights)

  • The CBGCSRD method increases the accuracy in all categories, except those categories for which the salient region detection method alone delivers poor classification performance
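The highlights describe the CBGCSRD step only at a high level. Below is a minimal, illustrative Python sketch of a global contrast-based saliency map with a crude stand-in for color boosting (channel-variance equalization); the quantization scheme, the boosting transform, and the function name are assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def cbgc_saliency(image, bins=12):
    """Global contrast-based saliency with a simple colour boost (sketch).

    Each pixel's saliency is the frequency-weighted distance of its
    quantized colour to all other quantized colours in the image
    (global contrast). Colour boosting is approximated here by
    rescaling the channels to equal variance before computing contrast.
    """
    h, w, _ = image.shape
    flat = image.reshape(-1, 3).astype(float)
    # crude colour boosting: zero-mean, unit-variance channels (assumption)
    flat = (flat - flat.mean(0)) / (flat.std(0) + 1e-8)
    # quantize each channel into `bins` levels and build a single colour code
    lo, hi = flat.min(0), flat.max(0)
    q = np.floor((flat - lo) / (hi - lo + 1e-8) * (bins - 1)).astype(int)
    codes = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
    uniq, inv, counts = np.unique(codes, return_inverse=True, return_counts=True)
    centers = np.stack([flat[inv == i].mean(0) for i in range(len(uniq))])
    freq = counts / counts.sum()
    # saliency of a colour: frequency-weighted distance to every other colour
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    colour_sal = (dists * freq[None, :]).sum(axis=1)
    sal = colour_sal[inv].reshape(h, w)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Usage: threshold the map to obtain a salient-region mask, e.g.
# mask = cbgc_saliency(img) > 0.5
```

In a saliency-guided BOVW pipeline, such a map would presumably be thresholded so that only local descriptors inside the salient region contribute to the BOVW histogram, reducing the influence of background information.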


Summary

Introduction

With the development of remote sensing sensors, satellite image sensors can offer images with a spatial resolution at the decimeter level. We call these images high-resolution remote sensing images (HRIs). Despite the enhanced resolution, HRIs often suffer from spectral uncertainty problems stemming from an increase in intra-class variance [1] and a decrease in inter-class variance [2]. Taking these characteristics into account, HRI classification methods have evolved from pixel-oriented to object-oriented methods and have achieved precise object recognition performance [3,4,5]. In order to better acquire semantic information in accordance with human cognition, scene classification, which aims at automatically labeling an image from a set of semantic categories [7], has been proposed and has achieved remarkable success in image interpretation.
