Abstract

The strength of long short-term memory neural networks (LSTMs) as applied so far lies more in handling sequences of variable length than in handling the geometric variability of image patterns. In this paper, an end-to-end convolutional LSTM neural network is used to handle both geometric variation and sequence variability. The best results for LSTMs are often based on large-scale training of an ensemble of network instances. We show that high performance can be reached on a common benchmark set with just five such networks, using proper data augmentation, a proper coding scheme, and a proper voting scheme. The networks have similar architectures (convolutional neural network (CNN): five layers; bidirectional LSTM (BiLSTM): three layers, followed by a connectionist temporal classification (CTC) processing step). The approach assumes differently scaled input images and different feature-map sizes. Three datasets are used: the standard benchmark RIMES dataset (French); a historical handwritten dataset, KdK (Dutch); and the standard benchmark George Washington (GW) dataset (English). The final performance obtained on the RIMES word-recognition test was 96.6%, a clear improvement over other state-of-the-art approaches that did not use a pre-trained network. On the KdK and GW datasets, our approach also shows good results. The proposed approach is deployed in the Monk search engine for historical-handwriting collections.
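
As a rough illustration of the architecture summarized above, the sketch below assembles a five-layer CNN, a three-layer BiLSTM, and a CTC-style output layer in PyTorch. The layer widths, kernel sizes, pooling choices, and the class count are illustrative assumptions, not the configuration reported in the paper.

```python
# A minimal PyTorch sketch, assuming a five-conv-layer CNN feeding a
# three-layer BiLSTM with a CTC-style output; channel widths, kernel sizes,
# pooling, and the 80-character alphabet are illustrative guesses.
import torch
import torch.nn as nn


class CnnBiLstmCtc(nn.Module):
    def __init__(self, num_classes: int = 80):
        super().__init__()
        # Five convolutional layers turn the word image into a feature sequence;
        # pooling shrinks only the height so the width keeps acting as the time axis.
        channels = [1, 32, 64, 128, 128, 256]
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(kernel_size=(2, 1))]
        self.cnn = nn.Sequential(*layers)
        # Three bidirectional LSTM layers model the character sequence.
        self.rnn = nn.LSTM(input_size=256, hidden_size=256, num_layers=3,
                           bidirectional=True, batch_first=True)
        # Per-timestep class scores, plus one extra output for the CTC blank.
        self.fc = nn.Linear(2 * 256, num_classes + 1)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 1, height, width), height assumed to be at least 32
        feats = self.cnn(images)                 # (B, 256, H', W)
        feats = feats.mean(dim=2)                # pool away remaining height -> (B, 256, W)
        feats = feats.permute(0, 2, 1)           # (B, W, 256): width steps are time steps
        seq, _ = self.rnn(feats)                 # (B, W, 512)
        return self.fc(seq).log_softmax(dim=-1)  # per-step log-probabilities for CTC
```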

Highlights

  • Convolutional neural networks (CNNs) [1] and long short-term memory networks (LSTMs) [2] and their variants [3, 4] have recently achieved impressive results [5, 6, 7]

  • We explore the possibilities of exploiting the success of current CNN/LSTM approaches, using several methods at the level of linguistics and labeling systematics, as well as an ensemble method (a minimal voting sketch follows this list)

  • The convolutional recurrent neural network is an end-to-end trainable system presented in [26]. It outperforms the plain CNN in four aspects: (1) it does not need precise annotation for each character and can handle a string of characters for the word image; (2) it works without a strict preprocessing phase, hand-crafted features, or component localization/segmentation; (3) it benefits from the state-preservation capability of a recurrent neural network (RNN) in order to deal with character sequences; (4) it does not depend on the width of the word image
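
The abstract and highlights mention combining just five networks with a voting scheme. The sketch below shows one plausible plurality-voting reading of that idea; the function signature and the tie handling are assumptions for illustration, not the paper's exact scheme.

```python
# A minimal sketch of plurality voting over an ensemble, assuming each
# recognizer is a callable that maps a word image to a word string; the
# function name and tie handling are illustrative, not the paper's scheme.
from collections import Counter
from typing import Callable, List


def ensemble_vote(image, recognizers: List[Callable]) -> str:
    """Return the word hypothesis proposed by the most ensemble members."""
    hypotheses = [recognize(image) for recognize in recognizers]
    # Counter.most_common breaks ties by first occurrence, so a tie falls
    # back to the hypothesis of the earliest-listed network.
    word, _ = Counter(hypotheses).most_common(1)[0]
    return word
```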

Summary

Introduction

Convolutional neural networks (CNNs) [1] and long short-term memory networks (LSTMs) [2] and their variants [3, 4] have recently achieved impressive results [5, 6, 7]. This exceptional performance comes at the cost of having an ensemble of, e.g., 100–2000 recognizers [8]. The model is made up of two distinct neural-network varieties, yet it can be trained integrally using a single loss function.
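
A minimal sketch of what training "integrally using a single loss function" can look like in practice: a single CTC loss drives gradient updates through both the recurrent and the convolutional parts. The helper name, tensor shapes, and optimizer handling are assumptions for illustration, not the authors' training code.

```python
# A minimal sketch of single-loss training, assuming `model` maps word images
# to per-timestep log-probabilities (e.g., the CnnBiLstmCtc sketch above);
# the helper name, shapes, and optimizer are illustrative assumptions.
import torch
import torch.nn as nn


def training_step(model, optimizer, images, targets, target_lengths, blank=0):
    ctc_loss = nn.CTCLoss(blank=blank, zero_infinity=True)
    log_probs = model(images)               # (batch, time, classes)
    log_probs = log_probs.permute(1, 0, 2)  # nn.CTCLoss expects (time, batch, classes)
    input_lengths = torch.full((images.size(0),), log_probs.size(0), dtype=torch.long)
    loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
    optimizer.zero_grad()
    loss.backward()   # one loss; gradients update the CNN and the BiLSTM together
    optimizer.step()
    return loss.item()
```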
