Abstract

The autoencoder is a typical unsupervised deep learning algorithm and is widely used by researchers in unsupervised learning. In a typical convolutional neural network (CNN), however, labeled training samples are scarce, the convolution kernels are set by experience, and the network structure is fixed and difficult to retrain later. This paper combines the convolutional neural network with the autoencoder and proposes a multi-channel integrated network structure to extract image features for recognition. First, based on the classic CNN structure, the convolution kernels of the CNN model are pre-trained with a sparse autoencoder (SAE). Second, image data at different scales are input and processed to extract spatial and spectral features, respectively. Then, multiple channels are constructed, with filters of different scales and different sampling intervals used for each channel. Finally, after one down-sampling layer, the feature maps obtained from the multiple channels are fed into the fully connected layer, and after one hidden layer the features finally used for classification are obtained. Experimental results show that using sparse autoencoding for pre-training improves time efficiency by 50% and further improves recognition accuracy, with the highest recognition rate reaching 0.985.
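As a rough illustration of the pre-training step described above, the sketch below trains a sparse autoencoder on small image patches and reshapes the learned encoder weights into convolution kernels. It is a minimal sketch only: PyTorch, the patch size, hidden-unit count, sparsity target, and the names SparseAutoencoder and pretrain_kernels are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Minimal sketch: a sparse autoencoder trained on patch_size x patch_size
# image patches; the learned encoder weights are reshaped into convolution
# kernels used to initialise the first CNN layer (all sizes are assumptions).
class SparseAutoencoder(nn.Module):
    def __init__(self, patch_size=5, n_hidden=32):
        super().__init__()
        n_in = patch_size * patch_size
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))
        return self.decoder(h), h

def sparsity_penalty(h, rho=0.05):
    # KL divergence between the target activation rho and the mean hidden activation
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

def pretrain_kernels(patches, patch_size=5, n_hidden=32, beta=3.0, epochs=50):
    """patches: (N, patch_size*patch_size) tensor of normalised greyscale patches."""
    sae = SparseAutoencoder(patch_size, n_hidden)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, h = sae(patches)
        loss = nn.functional.mse_loss(recon, patches) + beta * sparsity_penalty(h)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Each hidden unit's weight vector becomes one convolution kernel.
    return sae.encoder.weight.detach().reshape(n_hidden, 1, patch_size, patch_size)

# Usage (illustrative): initialise a conv layer with the pre-trained kernels.
# conv1 = nn.Conv2d(1, 32, kernel_size=5)
# conv1.weight.data.copy_(pretrain_kernels(patches))
```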

Highlights

  • Deep learning is a brand new branch of machine learning and a powerful core driver in the field of artificial intelligence [1], [2]

  • After one layer of down-sampling, the feature maps obtained from multiple channels are input into the fully connected layer, and after a hidden layer, the features used for classification are obtained (a minimal sketch of this fusion step follows this list)

  • The value of the loss function of the independent component analysis (ICA) model declined during training; these results show that the model in this paper is much more efficient than the convolutional neural network (CNN) model and the ICA model
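The fusion step described in the second highlight can be pictured roughly as follows: feature maps from several channels are down-sampled once, flattened, concatenated, and passed through one hidden layer before the classifier. This is an illustrative sketch under assumed sizes; MultiChannelHead and all dimensions are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Sketch of the multi-channel fusion head: one down-sampling layer,
# concatenation of the flattened feature maps, one hidden layer,
# then the classification layer. Sizes are assumptions.
class MultiChannelHead(nn.Module):
    def __init__(self, n_classes=10, hidden=128):
        super().__init__()
        self.pool = nn.MaxPool2d(2)           # the single down-sampling layer
        self.hidden = nn.LazyLinear(hidden)   # infers input size from the concatenated maps
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, channel_maps):
        # channel_maps: list of (batch, C_i, H_i, W_i) tensors, one per channel
        pooled = [self.pool(m).flatten(1) for m in channel_maps]
        fused = torch.cat(pooled, dim=1)
        return self.classifier(torch.relu(self.hidden(fused)))
```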

Summary

INTRODUCTION

Deep learning is a brand new branch of machine learning and a powerful core driver in the field of artificial intelligence [1], [2]. A two-column CNN has been proposed for feature extraction and classifier training, combined with image style and semantic attributes; experiments show that this method outperforms previous methods. A linear filter performs the convolution operation on the input image or on the previously obtained features, and a nonlinear function is then applied to obtain the feature map output by this layer.

BASIC STRUCTURE OF CONVOLUTIONAL NEURAL NETWORK

A convolutional neural network is a deep feed-forward artificial neural network and one of the representative algorithms of deep learning [40]. It can automatically learn multi-layer features directly from images and has very good representation capabilities. After high-level features are extracted through the convolutional, pooling, and ReLU layers, the fully connected layer is usually placed at the end of the network; neurons in this layer depend on all activations in the previous layer. The loss function is defined as the negative logarithm of the probability output by the softmax function.
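The basic structure just described (convolution with linear filters, a nonlinear activation, pooling, a fully connected layer at the end, and a loss defined as the negative logarithm of the softmax probability) can be sketched as follows. The layer sizes, the 28x28 input, and the class name BasicCNN are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Minimal CNN sketch: convolution -> nonlinearity -> pooling -> fully connected,
# trained with the negative log of the softmax probability (cross-entropy).
class BasicCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=5)   # linear filters
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(16 * 12 * 12, n_classes)  # assumes 28x28 greyscale input

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))  # nonlinear map -> pooled feature maps
        return self.fc(x.flatten(1))

# Loss: negative logarithm of the softmax probability of the true class.
logits = BasicCNN()(torch.randn(4, 1, 28, 28))
targets = torch.randint(0, 10, (4,))
loss = nn.functional.cross_entropy(logits, targets)
```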

AUTOMATIC ENCODER
SAE PRE-TRAINING CONVOLUTION KERNEL
CONCLUSION