Abstract

Deep neural networks have been shown to be useful for the classification of hyperspectral images, particularly when a large amount of labeled data is available. However, for many practical geospatial image analysis applications, we may not have enough reference data to train a deep neural network. To address this issue, in this paper, we propose a deep feature alignment neural network to carry out domain adaptation, in which labeled data from a supplementary source domain are used to improve classification performance in a target domain where only limited labeled data are available. In the proposed model, discriminative features for the source and target domains are first extracted using deep convolutional recurrent neural networks and then aligned layer by layer by mapping the features of each layer to a transformed common subspace. Experimental results are presented with two data sets. One of these data sets represents domain adaptation between images acquired at different times, while the other represents a particularly challenging domain adaptation problem in which the source and target images are acquired by different hyperspectral imagers from different viewpoints and platforms (a ground-based, forward-looking street view of objects acquired at close range and an aerial hyperspectral image). We demonstrate that the proposed deep learning framework enables robust classification of the target domain data by leveraging information from the source domain.
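The sketch below is a minimal, hypothetical illustration of the layer-wise alignment idea described above, not the authors' implementation: two feature extractors produce per-layer features for the source and target domains, each layer's features are projected into a shared subspace, and an L2 penalty pulls the projected batch statistics together. For brevity, plain 1-D convolutional blocks stand in for the paper's convolutional recurrent extractor, and all shapes, widths, and the alignment penalty are assumptions.

```python
# Minimal sketch (assumption, not the paper's released code) of layer-wise
# feature alignment between source- and target-domain feature extractors.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Stack of 1-D conv blocks over the spectral axis; returns every layer's features."""
    def __init__(self, in_channels=1, widths=(32, 64, 128)):
        super().__init__()
        blocks, prev = [], in_channels
        for w in widths:
            blocks.append(nn.Sequential(
                nn.Conv1d(prev, w, kernel_size=3, padding=1),
                nn.BatchNorm1d(w),
                nn.ReLU(),
                nn.MaxPool1d(2),
            ))
            prev = w
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x):
        feats = []
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)
        return feats  # one feature map per layer

class LayerAligner(nn.Module):
    """Projects per-layer source/target features into common subspaces and scores alignment."""
    def __init__(self, widths=(32, 64, 128), subspace_dim=64):
        super().__init__()
        self.src_proj = nn.ModuleList([nn.Linear(w, subspace_dim) for w in widths])
        self.tgt_proj = nn.ModuleList([nn.Linear(w, subspace_dim) for w in widths])

    def forward(self, src_feats, tgt_feats):
        loss = 0.0
        for ps, pt, fs, ft in zip(self.src_proj, self.tgt_proj, src_feats, tgt_feats):
            zs = ps(fs.mean(dim=-1))  # pool over the spectral axis, then project
            zt = pt(ft.mean(dim=-1))
            loss = loss + (zs.mean(0) - zt.mean(0)).pow(2).sum()  # align batch means
        return loss

# Toy usage: hyperspectral pixels as 1-D spectra, shape (batch, 1, bands); values are random placeholders.
src = torch.randn(16, 1, 128)  # labeled source-domain spectra
tgt = torch.randn(16, 1, 128)  # sparsely labeled target-domain spectra
extractor = FeatureExtractor(in_channels=1)
aligner = LayerAligner()
align_loss = aligner(extractor(src), extractor(tgt))
print(align_loss.item())
```

In a full training loop, this alignment term would typically be added to the supervised classification loss on the labeled source data so that the shared subspaces become both discriminative and domain-invariant.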
