Abstract

Effective classification of small target objects is essential for ensuring safety in no-fly zones. However, the differences in color and texture among small targets in the sky, such as birds, unmanned aerial vehicles (UAVs), and kites, are often subtle. In this paper, we introduce a higher-layer visualizing feature extraction method based on a hybrid deep network model that combines a Sparse Autoencoder (SAE), a Convolutional Neural Network (CNN), and a regression classifier to classify different types of target object images. In addition, because the number of available samples of small-sample targets in the sky may be insufficient to learn enough local features directly for classification based on higher-layer visualizing feature extraction, we introduce transfer learning into the SAE model to obtain cross-domain higher-layer local visualizing features; these features, together with the target-domain small-sample object images, are fed into the CNN model to obtain global visualizing features of the target objects. Experimental results show that the higher-layer visualizing feature extraction and the transfer learning deep networks are effective for classifying small-sample target objects in the sky.
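The following is a minimal PyTorch sketch, not the authors' implementation, of the hybrid pipeline described in the abstract: filters learned by an SAE on image patches initialize a convolutional layer, whose pooled outputs feed a softmax (regression) classifier. The patch size, filter count, pooling size, image size, and the three-class setup (birds, UAVs, kites) are illustrative assumptions.

```python
# Sketch of the hybrid SAE + CNN + regression-classifier pipeline (assumptions,
# not the paper's exact architecture or hyperparameters).
import torch
import torch.nn as nn

PATCH = 8          # assumed SAE patch size (pixels)
N_FILTERS = 64     # assumed number of SAE hidden units / conv filters
N_CLASSES = 3      # birds, UAVs, kites (from the abstract)

class SparseAutoencoder(nn.Module):
    """Single-hidden-layer autoencoder trained on unlabeled image patches."""
    def __init__(self, n_in=PATCH * PATCH, n_hidden=N_FILTERS):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))        # hidden code (local features)
        return torch.sigmoid(self.decoder(h)), h  # reconstruction, code

class HybridClassifier(nn.Module):
    """CNN whose first-layer filters are initialized from a trained SAE."""
    def __init__(self, sae: SparseAutoencoder, image_size=32):
        super().__init__()
        self.conv = nn.Conv2d(1, N_FILTERS, kernel_size=PATCH)
        # Reshape the SAE encoder weights (n_hidden x n_in) into conv filters.
        with torch.no_grad():
            self.conv.weight.copy_(
                sae.encoder.weight.view(N_FILTERS, 1, PATCH, PATCH))
            self.conv.bias.copy_(sae.encoder.bias)
        self.pool = nn.AvgPool2d(kernel_size=5)    # assumed pooling size
        pooled = (image_size - PATCH + 1) // 5
        self.classifier = nn.Linear(N_FILTERS * pooled * pooled, N_CLASSES)

    def forward(self, x):
        f = torch.sigmoid(self.conv(x))            # local visualizing features
        f = self.pool(f)                           # pooled (global) features
        return self.classifier(f.flatten(1))       # softmax-regression logits
```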

Highlights

  • Due to weather, camouflage, and other factors, it is often difficult to classify target objects in the sky in a timely and accurate manner

  • The Sparse Autoencoder (SAE) model [5] is an unsupervised feature learning method; through reconstruction training on unlabeled sample data, higher-layer visualizing feature extraction for target objects can be effectively generalized to small-sample target object image applications [6,7,8]

  • Experiments verify that the proposed algorithms classify small-sample target objects well, and the classification performance of transfer learning versus non-transfer learning based on the SAE higher-layer visualizing feature extraction model is compared on small-sample target objects

Summary

Introduction

Due to weather, camouflage, and other factors, it is often difficult to classify target objects in the sky in a timely and accurate manner. The Sparse Autoencoder (SAE) model [5] is an unsupervised feature learning method; through reconstruction training on unlabeled sample data, higher-layer visualizing feature extraction for target objects can be effectively generalized to small-sample target object image applications [6,7,8]. This paper proposes an SAE-based higher-layer visualizing feature extraction method for small-sample target objects in the sky. Experiments verify that the proposed algorithms classify small-sample target objects well, and the classification performance of transfer learning versus non-transfer learning based on the SAE higher-layer visualizing feature extraction model is compared on small-sample target objects. The effectiveness and accuracy of the new algorithm are verified by dividing the target object images into a training set and a test set.
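As a rough illustration of the unsupervised reconstruction training and the transfer step mentioned above, the sketch below (reusing the hypothetical SparseAutoencoder and HybridClassifier from the earlier snippet) fits the autoencoder on unlabeled source-domain patches with a KL-divergence sparsity penalty and then reuses the learned encoder for the small-sample target domain. The sparsity target, penalty weight, and training schedule are assumptions, not values from the paper.

```python
# Sketch of SAE reconstruction training with a sparsity penalty, followed by
# cross-domain reuse of the learned filters (assumed values throughout).
import torch
import torch.nn.functional as F

def kl_sparsity(rho_hat, rho=0.05):
    """KL divergence between the target activation rho and the mean activation."""
    rho_hat = rho_hat.clamp(1e-6, 1 - 1e-6)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

def train_sae(sae, source_patches, epochs=50, beta=3.0, lr=1e-3):
    """Fit the SAE on unlabeled source-domain patches; the encoder is later
    transferred to the small-sample target domain."""
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, code = sae(source_patches)
        loss = F.mse_loss(recon, source_patches) + beta * kl_sparsity(code.mean(0))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return sae

# Usage (hypothetical tensors): train on abundant source-domain patches, then
# build the classifier for the small-sample target-domain images.
# sae = train_sae(SparseAutoencoder(), source_patches)
# model = HybridClassifier(sae)
```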

Local feature learning based on the SAE model
Convolutional layer
Conclusions