Abstract

There is now a wide range of research on end-to-end trained, sequence-to-sequence models for handwritten character recognition. Most of these combine a convolutional neural network (CNN) as a feature-extraction module with a recurrent neural network (RNN) as a sequence-to-sequence module. Notably, the CNN layers can accept input images of varying sizes and the RNN layers can handle input sequences of varying lengths, which together give the recognition models their dynamic character. However, when minimizing training time is the top priority, the models must receive training data in mini-batches, which requires resizing or padding images to a uniform size rather than using the original variable-size images, because most deep learning frameworks (such as Keras, TensorFlow, and Caffe) only accept same-size inputs and outputs within one mini-batch. This practice may reduce the dynamicity of the model during training, raising the question of whether there is a trade-off between the model's effectiveness (accuracy) and its training-time optimization. In this paper, we examine the impact of various padding and non-padding methods on the same model architecture for Japanese handwriting recognition, and conclude which method offers the most reasonable training time while producing an accuracy of up to 95%.
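
To make the padding step concrete, below is a minimal sketch in NumPy of how variable-size images might be padded to a common shape and stacked into one mini-batch. The function name pad_batch, the zero pad value, and the top-left alignment strategy are illustrative assumptions for this sketch, not the paper's actual pipeline.

    import numpy as np

    def pad_batch(images, pad_value=0.0):
        # Pad a list of variable-size grayscale images (H, W) to a
        # common shape so they can be stacked into one mini-batch.
        # Illustrative sketch only; the paper compares several
        # padding and non-padding strategies.
        max_h = max(img.shape[0] for img in images)
        max_w = max(img.shape[1] for img in images)
        batch = np.full((len(images), max_h, max_w), pad_value, dtype=np.float32)
        for i, img in enumerate(images):
            h, w = img.shape
            # Top-left alignment; centering or resizing are alternatives.
            batch[i, :h, :w] = img
        return batch

    # Example: three images of different sizes become one (3, 64, 120) batch.
    imgs = [np.random.rand(48, 96), np.random.rand(64, 120), np.random.rand(32, 80)]
    print(pad_batch(imgs).shape)  # (3, 64, 120)

In practice, padding to the largest image in each mini-batch (rather than a global fixed size) limits wasted computation, which is one reason the choice of padding strategy affects training time.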
