Abstract

In recent years, researchers have proposed many deep learning algorithms for data representation learning. However, most deep networks require large amounts of training data and long training times to obtain good results. In this paper, we propose a novel deep learning method based on stretching deep architectures composed of stacked feature learning models; hence, the method is called "stretching deep architectures" (SDA). In the feedforward propagation of SDA, feature learning models are first stacked and trained layer by layer, and then a stretching technique is applied to map the last layer of features to a high-dimensional space. Because the feature learning models are optimized efficiently and the stretching technique is easy to compute, training SDA is very fast. More importantly, learning SDA does not require back-propagation, which distinguishes it from most existing deep learning models. We have tested SDA on visual texture perception, handwritten text recognition, and natural image classification tasks. Extensive experiments demonstrate the advantages of SDA over traditional feature learning models and related deep learning models.
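To make the pipeline concrete, below is a minimal sketch of the SDA-style workflow the abstract describes: greedy, layer-by-layer training of stacked feature learners followed by a stretching map, with no back-propagation. The abstract does not specify the concrete feature learners or the exact stretching map, so the PCA-style layers, the tanh nonlinearity, and the random linear projection here are hypothetical stand-ins for illustration only.

```python
# Hypothetical SDA-style pipeline sketch (NumPy only).
# PCA layers and the random "stretching" projection are assumptions,
# not the authors' actual components.
import numpy as np

def fit_pca_layer(X, n_components):
    """Fit one feature-learning layer (PCA stand-in) on data X."""
    mean = X.mean(axis=0)
    # Principal directions from the SVD of the centered data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:n_components].T          # shape: (d_in, n_components)
    return mean, W

def forward_layer(X, mean, W):
    """Propagate data through a trained layer (no back-propagation)."""
    return np.tanh((X - mean) @ W)   # nonlinearity is an assumption

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))       # toy input data

# 1) Stack feature-learning layers and train them greedily, layer by layer.
layers, H = [], X
for n_comp in (32, 16):
    mean, W = fit_pca_layer(H, n_comp)
    layers.append((mean, W))
    H = forward_layer(H, mean, W)

# 2) "Stretch": map the last layer's features to a high-dimensional space.
#    A fixed random projection serves as a hypothetical stand-in here.
d_high = 256
S = rng.normal(size=(H.shape[1], d_high)) / np.sqrt(H.shape[1])
features = H @ S                     # stretched features, ready for a
                                     # simple classifier (e.g., linear SVM)
```

Since every step is a closed-form fit or a fixed linear map, the whole pipeline runs in a single forward pass over the data, which is consistent with the fast, back-propagation-free training the abstract claims.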
