In this paper, we study the problem of feature extraction for knowledge transfer between multiple remotely sensed images in the context of land-cover classification. Several factors, such as illumination, atmospheric, and ground conditions, cause radiometric differences between images of similar scenes acquired over different geographical areas or over the same scene at different time instants. Accordingly, a change in the probability distributions of the classes is observed. The purpose of this work is to statistically align, in the feature space, an image of interest that still has to be classified (the target image) to another image whose ground truth is already available (the source image). Following a specifically designed feature extraction step applied to both images, we show that classifiers trained on the source image can successfully predict the classes of the target image despite the shift that has occurred. In this context, we analyze a recently proposed domain adaptation method aimed at reducing the distance between domains, Transfer Component Analysis, and assess the potential of its unsupervised and semisupervised implementations. In particular, through a dedicated study of its key additional objectives, namely the alignment of the projection with the labels and the preservation of the local data structures, we demonstrate the advantages of Semisupervised Transfer Component Analysis. We compare this approach with other linear and kernel-based feature extraction techniques. Experiments on multi- and hyperspectral acquisitions show remarkable cross-image classification performance for the considered strategy, thus confirming its suitability when applied to remotely sensed images.
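To illustrate the kind of alignment the abstract refers to, the sketch below is a minimal NumPy implementation of unsupervised Transfer Component Analysis, which learns a kernel projection that reduces the maximum mean discrepancy (MMD) between source and target samples while preserving data variance. The function and parameter names (`tca`, `rbf_kernel`, `dim`, `mu`, `gamma`) are assumptions for illustration, not the authors' code, and the semisupervised label-alignment and locality-preservation terms discussed in the paper are omitted.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel from pairwise squared Euclidean distances
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def tca(Xs, Xt, dim=2, mu=1.0, gamma=1.0):
    """Unsupervised TCA sketch: project source and target samples into a
    shared subspace that minimizes the MMD between the two domains."""
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    X = np.vstack([Xs, Xt])
    K = rbf_kernel(X, X, gamma)
    # MMD coefficient matrix L = e e^T, with e_i = 1/ns (source), -1/nt (target)
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    # Leading eigenvectors of (K L K + mu I)^{-1} K H K give the components
    M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(M)
    idx = np.argsort(-vals.real)[:dim]
    W = vecs[:, idx].real
    Z = K @ W  # embedded samples, source rows first
    return Z[:ns], Z[ns:]
```

A classifier trained on the projected source samples (with the available source labels) can then be applied directly to the projected target samples.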