Abstract

The Listen, Attend and Spell (LAS) network is an end-to-end approach to speech recognition that does not require an explicit language model. It consists of two parts: an encoder, which receives acoustic features as input, and a decoder, which produces one character per time step based on the encoder output and an attention mechanism. Multi-layer recurrent neural networks (RNNs) are used in both parts, so the LAS architecture can be simplified as one RNN for the encoder and another for the decoder; their shapes and layer sizes can differ. In this work, we examine the performance of using multiple RNNs in the encoder. Our baseline LAS network uses an encoder RNN with a hidden size of 256; the proposed variants use 2 and 4 parallel RNNs with hidden sizes of 128 and 64, respectively. The main idea behind the proposed approach is to let each RNN focus on different patterns (phonemes in this case) in the data. At the output of the encoder, the RNNs' outputs are concatenated and fed to the decoder. The TIMIT database is used to compare the networks, with phoneme error rate as the performance metric. The experimental results show that the proposed approach can outperform the baseline network; however, increasing the number of RNNs does not guarantee further improvement.
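The multi-RNN encoder described above can be illustrated with a minimal sketch. This is not the authors' implementation: it uses plain tanh RNNs in numpy instead of a trained LAS model, and the sequence length, feature dimension, and random weights are all assumptions chosen only to show that two 128-unit branches, concatenated, match the shape of one 256-unit baseline encoder.

```python
import numpy as np

def simple_rnn(x, W_xh, W_hh):
    """Run a plain tanh RNN over a sequence x of shape (T, F);
    returns the hidden state at every time step, shape (T, H)."""
    T = x.shape[0]
    H = W_hh.shape[0]
    h = np.zeros(H)
    outs = np.zeros((T, H))
    for t in range(T):
        h = np.tanh(x[t] @ W_xh + h @ W_hh)
        outs[t] = h
    return outs

rng = np.random.default_rng(0)
T, F = 50, 40  # frames and acoustic feature dim (illustrative values)
x = rng.standard_normal((T, F))

# Baseline: a single encoder RNN with hidden size 256.
baseline = simple_rnn(
    x,
    rng.standard_normal((F, 256)) * 0.1,
    rng.standard_normal((256, 256)) * 0.1,
)

# Proposed: two parallel RNNs with hidden size 128 each; their
# outputs are concatenated into the same 256-dim encoder output
# that the decoder's attention mechanism would attend over.
branches = [
    simple_rnn(
        x,
        rng.standard_normal((F, 128)) * 0.1,
        rng.standard_normal((128, 128)) * 0.1,
    )
    for _ in range(2)
]
encoder_out = np.concatenate(branches, axis=-1)

print(baseline.shape, encoder_out.shape)  # both (50, 256)
```

The 4-branch variant from the abstract follows the same pattern with hidden size 64 per branch, again yielding a 256-dimensional concatenated output.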
