Abstract

The classification of hyperspectral data with deep learning methods can achieve better results than previous shallow classifiers, but deep learning algorithms have some limitations: they require a large amount of data to train the network, as well as a certain amount of labeled data to fine-tune it. In this paper, we propose a new hyperspectral data processing method based on transfer learning and deep learning. First, we pre-train the deep learning network on a hyperspectral data set that is similar to the target data set, and we use transfer learning to find the features that the source-domain and target-domain data have in common. Second, we propose a model structure that combines the deep transfer learning model with a joint use of spatial and spectral information. The transferred network provides the spectral features, while several principal components of the target data are regarded as the spatial features of the target domain; the joint features are then fed to the classifier. The data are obtained from a public hyperspectral database. Using the same amount of data, our method based on transfer learning and a deep belief network achieves better classification accuracy in less time.
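To make the pipeline concrete, the sketch below walks through the main steps on toy data. It is a minimal illustration under assumed shapes and hyperparameters, using scikit-learn's BernoulliRBM as a stand-in for the DBN layers, PCA for the spatial features, and a logistic regression classifier; it is not the authors' implementation.

```python
# Minimal sketch of the spectral-spatial transfer pipeline described in the abstract.
# All names, shapes, and hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder hyperspectral pixels: rows are pixels, columns are spectral bands.
X_source = rng.random((2000, 103))          # source-domain pixels (many labels available)
X_target = rng.random((500, 103))           # target-domain pixels (few labels)
y_target = rng.integers(0, 9, size=500)     # target-domain class labels

# 1) Pre-train stacked RBM layers (a DBN surrogate) on the source domain.
rbm1 = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
rbm2 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
h1_src = rbm1.fit_transform(X_source)
rbm2.fit(h1_src)

# 2) Transfer the pre-trained layers: reuse their weights unchanged to
#    extract spectral features from the target-domain pixels.
spectral_feats = rbm2.transform(rbm1.transform(X_target))

# 3) Spatial features: keep the first few principal components of the target data.
spatial_feats = PCA(n_components=4).fit_transform(X_target)

# 4) Concatenate spectral and spatial features and train the final classifier.
joint_feats = np.hstack([spectral_feats, spatial_feats])
clf = LogisticRegression(max_iter=1000).fit(joint_feats, y_target)
print("Training accuracy on the toy data:", clf.score(joint_feats, y_target))
```

Because the transferred layers are reused without retraining, only the small joint-feature classifier has to be fitted on the scarce target-domain labels.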

Highlights

  • Hyperspectral data are three-dimensional data composed of measurements from hundreds of spectral channels [1]

  • As hyperspectral sensor hardware improves, the acquired data gain spectral resolution and, to a greater extent, spatial resolution, whereas downsampling can cause a rapid loss of spatial detail [8]

  • We propose a deep belief network (DBN)-based transfer learning network model


Summary

Introduction

Hyperspectral data are three-dimensional data composed of measurements from hundreds of spectral channels [1]. Based on the idea of transfer learning, the new model requires only a very small amount of sample data and does not need to retrain the whole deep learning network. This saves manpower, material, and financial resources by using fewer samples together with the transfer learning model, and it also reduces network training time. The first step is to construct a deep learning network model on source-domain hyperspectral data with a large number of labels and train it to extract the abstract features at all levels of the source-domain data.
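A minimal sketch of this first step is given below, assuming a greedy layer-wise DBN pre-training scheme with scikit-learn's BernoulliRBM as a stand-in for each DBN layer; the layer sizes and hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Sketch of greedy layer-wise pre-training of a DBN-style stack on source-domain
# data, so that the learned layers can later be transferred instead of retraining
# the whole network. Layer sizes and hyperparameters are assumptions.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(42)
X_source = rng.random((5000, 103))   # source-domain hyperspectral pixels, bands scaled to [0, 1]

layer_sizes = [128, 64, 32]          # hidden units per DBN layer (assumed)
pretrained_layers = []
activations = X_source

# Train each RBM on the activations of the layer below it; the hidden
# representation at each level is the abstract feature extracted there.
for n_hidden in layer_sizes:
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=15, random_state=0)
    activations = rbm.fit_transform(activations)
    pretrained_layers.append(rbm)

def extract_features(X):
    """Push data through the transferred stack to obtain its abstract features."""
    h = X
    for rbm in pretrained_layers:
        h = rbm.transform(h)
    return h

# Target-domain pixels reuse the transferred layers; only a small labeled set
# is then needed to fine-tune a classifier on top of these features.
target_features = extract_features(rng.random((300, 103)))
```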

Algorithm
Transfer Learning
Experimental Data
Influence of Network
Influence of Transfer Layer
The Number of Target Domain Samples
Experimental
Comparison with Other Methods
Confusion
Conclusions