Abstract

The goal of many tasks in sequence processing is to map a sequence of input data to a sequence of output labels. Long short-term memory (LSTM), a type of recurrent neural network (RNN), equipped with connectionist temporal classification (CTC) has proved to be one of the most suitable tools for such tasks. With CTC, per-frame labeled sequences are no longer required; it suffices to know only the sequence of labels. However, in CTC only a single state is assigned to each label, and consequently the LSTM does not learn intra-label relationships. In this paper, we propose to remedy this weakness by increasing the number of states assigned to each label and explicitly modeling the transitions between these sub-labels. Moreover, the output of a CTC network usually corresponds to the set of all possible labels plus a blank symbol. One use of the blank is the recognition of multiple consecutive identical labels. By assigning more than one state to each label, we can also decode consecutive identical labels without resorting to the blank. We investigated the effect of increasing the number of sub-labels, with and without the blank, on the recognition rate of the system. We performed experiments on two Arabic datasets, printed and handwritten. Our experiments showed that while a model without the blank may converge faster on simple tasks, on real-world complex datasets the use of the blank significantly improves the results.
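The following is a minimal sketch, not the authors' code, illustrating the idea described above: how a label sequence might be expanded into a state sequence under standard CTC (one state per label, separated by blanks) versus a multi-state scheme in which each label receives several sub-states. The number of sub-states (`n_states`) and the blank token are illustrative assumptions.

```python
# Hypothetical illustration of label-to-state expansion for CTC-style decoding.
BLANK = "<b>"

def expand_ctc(labels):
    """Standard CTC expansion: blank, l1, blank, l2, ..., blank."""
    states = [BLANK]
    for label in labels:
        states += [label, BLANK]
    return states

def expand_multistate(labels, n_states=2, use_blank=False):
    """Assign n_states sub-states to each label; optionally keep the blank.
    With more than one sub-state, repeated labels remain distinguishable even
    without a separating blank, since each occurrence must traverse its own
    sub-state sequence (e.g. a_1 -> a_2 -> a_1 -> a_2 for 'aa')."""
    states = [BLANK] if use_blank else []
    for label in labels:
        states += [f"{label}_{k}" for k in range(1, n_states + 1)]
        if use_blank:
            states.append(BLANK)
    return states

if __name__ == "__main__":
    print(expand_ctc(["a", "a"]))         # ['<b>', 'a', '<b>', 'a', '<b>']
    print(expand_multistate(["a", "a"]))  # ['a_1', 'a_2', 'a_1', 'a_2']
```

In the standard expansion, two consecutive identical labels are only decodable because a blank separates them; in the multi-state expansion, the sub-state boundary plays that role, which is why decoding without a blank becomes possible.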
