Abstract

In recent years, many deep architectures have been proposed for handwritten text recognition. However, most previous deep models require large-scale training data and long training times to obtain good results. In this paper, we propose a novel deep learning method based on “stretching” the projection matrices of stacked feature learning models. We call the proposed method “stretching deep architectures” (SDA). In SDA, stacked feature learning models are first learned layer by layer, and the stretching technique is then applied to the weight matrices between successive layers. Because the feature learning models can be optimized efficiently and the stretching results can be computed easily, training SDA is very fast and requires no back-propagation. We have tested SDA on handwritten digit recognition, Arabic subword recognition, and English letter recognition tasks. Extensive experiments demonstrate that SDA outperforms not only shallow feature learning models but also state-of-the-art deep learning models.
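
To make the pipeline concrete, the sketch below illustrates the overall training flow described above: layer-wise learning of stacked feature learners followed by a stretching step, with no back-propagation. The abstract does not specify the exact feature learners or the stretching operator, so the PCA layers and the stretch() function here are illustrative assumptions, not the paper's precise formulation.

```python
# Minimal sketch of the SDA-style training flow, using NumPy only.
# The PCA feature learner and the stretch() operator are assumptions;
# the paper defines its own feature learning models and stretching rule.
import numpy as np

def pca_layer(X, out_dim):
    """Learn one feature-learning layer (here: PCA) and return its projection matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:out_dim].T                      # shape: (in_dim, out_dim)

def stretch(W, factor=2, seed=0):
    """Hypothetical 'stretching': widen the projection matrix with extra
    normalized random directions (an illustrative stand-in)."""
    rng = np.random.default_rng(seed)
    extra = rng.standard_normal((W.shape[0], W.shape[1] * (factor - 1)))
    extra /= np.linalg.norm(extra, axis=0, keepdims=True)
    return np.hstack([W, extra])               # shape: (in_dim, out_dim * factor)

def train_sda(X, layer_dims, factor=2):
    """Layer-wise training followed by stretching; no back-propagation."""
    weights = []
    H = X
    for d in layer_dims:
        W = stretch(pca_layer(H, d), factor)   # learn the layer, then stretch it
        H = np.tanh(H @ W)                     # forward the data to the next layer
        weights.append(W)
    return weights

# Toy usage: 200 samples of 64-dimensional "handwritten digit" features.
X = np.random.default_rng(1).standard_normal((200, 64))
weights = train_sda(X, layer_dims=[32, 16])
print([W.shape for W in weights])              # [(64, 64), (64, 32)]
```

Because each layer is learned in closed form and then only widened, the whole procedure runs in a single forward pass over the stack, which is the source of the fast training claimed above.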
