Abstract

End-to-end speech recognition systems have been successfully implemented and have become competitive replacements for hybrid systems. A common loss function used to train end-to-end systems is connectionist temporal classification (CTC), which maximizes the log likelihood of the transcription sequence given the feature sequence. However, CTC training has a notable weakness: the training criterion differs from the test criterion, since training maximizes log likelihood while testing measures word error rate (WER). In this work, we introduce a new lattice-based transcription loss function to address this deficiency of CTC training. Unlike the CTC criterion, our method optimizes the model directly with respect to the transcription loss. We evaluate the new algorithm on a small speech recognition task, the Wall Street Journal (WSJ) dataset, and on a large-vocabulary speech recognition task, the Switchboard dataset. Results demonstrate that our algorithm outperforms the traditional CTC criterion, achieving a 7% relative reduction in WER. In addition, we compare our algorithm to discriminative training methods such as state-level minimum Bayes risk (SMBR) and minimum word error (MWE), and show that our algorithm is more convenient and more flexible for speech recognition.
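For reference, the standard CTC criterion that the paper compares against can be expressed in a few lines of PyTorch. The sketch below is an illustration added here, not code from the paper; the tensor dimensions (T, N, C, S) and the random inputs are illustrative assumptions.

# A minimal sketch of the baseline CTC criterion, using torch.nn.CTCLoss.
# All shapes and toy dimensions here are assumptions for illustration,
# not taken from the paper.
import torch
import torch.nn as nn

T, N, C = 50, 4, 20   # frames, batch size, output classes (including blank)
S = 10                # maximum target transcription length

# Per-frame log-probabilities over output symbols, e.g. from an acoustic
# encoder followed by log_softmax.
log_probs = torch.randn(T, N, C).log_softmax(dim=2)

targets = torch.randint(1, C, (N, S), dtype=torch.long)   # index 0 is blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(1, S + 1, (N,), dtype=torch.long)

# CTC minimizes the negative log likelihood of the transcription given the
# feature sequence -- the training criterion the abstract argues is
# mismatched with the word-error-rate test criterion.
ctc_loss = nn.CTCLoss(blank=0)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(loss.item())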
