Abstract
Handwritten text recognition, i.e., the conversion of scanned handwritten documents into machine-readable text, is a challenging task due to the variability and complexity of handwriting. A common approach in handwritten text recognition consists of a feature extraction step followed by a recognizer. In this paper, we propose a novel DNN architecture for handwritten text recognition that extracts a discrete representation from the input text-line image. The proposed model consists of an encoder–decoder network with an added quantization layer that uses a dictionary of representative vectors to discretize the latent variables. The dictionary and the network parameters are trained jointly, using the k-means algorithm and backpropagation, respectively. The performance of the proposed model is evaluated through extensive experiments on five datasets, analyzing the effect of discrete representation on handwriting recognition. The results demonstrate that feature discretization improves the performance of deep handwritten text recognition models compared to conventional DNN models with continuous representations. Specifically, the character error rate is decreased by 22% and 21.1% on the IAM and ICFHR18 datasets, respectively.
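The quantization step described above can be illustrated with a minimal sketch: each latent vector is replaced by its nearest entry in a learned dictionary (codebook), and the dictionary is refined with a k-means-style mean update. This is an assumption-laden simplification of the paper's layer (function names, shapes, and the plain numpy setting are illustrative, not the authors' implementation):

```python
import numpy as np

def quantize(latents, codebook):
    """Replace each latent vector with its nearest codebook entry (L2 distance).

    latents:  (n, d) array of continuous latent vectors
    codebook: (k, d) array of representative vectors
    Returns the quantized vectors and the index of the chosen code per latent.
    """
    # Pairwise squared distances between latents and codes: shape (n, k).
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)
    return codebook[idx], idx

def kmeans_update(latents, idx, codebook):
    """One k-means assignment-based update: each code moves to the mean of
    the latents assigned to it (codes with no members are left unchanged)."""
    new_codebook = codebook.copy()
    for k in range(len(codebook)):
        members = latents[idx == k]
        if len(members) > 0:
            new_codebook[k] = members.mean(axis=0)
    return new_codebook
```

In a full model, the encoder output would pass through `quantize` before the decoder, with the network weights trained by backpropagation while the codebook is updated by `kmeans_update`, mirroring the joint training scheme the abstract describes.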