Abstract

In this paper, we propose a novel neural network structure, namely the feedforward sequential memory network (FSMN), to model long-term dependency in time series without using recurrent feedback. The proposed FSMN is a standard fully connected feedforward neural network equipped with learnable memory blocks in its hidden layers. The memory blocks use a tapped-delay line structure to encode long context information into a fixed-size representation as a short-term memory mechanism, somewhat similar to the layers of time-delay neural networks. We have evaluated FSMNs on several standard benchmark tasks, including speech recognition and language modeling. Experimental results show that FSMNs outperform conventional recurrent neural networks (RNNs) when modeling sequential signals such as speech and language, while being trained much more reliably and faster. Moreover, we also propose a compact feedforward sequential memory network (cFSMN), which combines the FSMN with low-rank matrix factorization and slightly modifies the encoding method used in FSMNs in order to further simplify the network architecture. On the Switchboard speech recognition task, the proposed cFSMN structures reduce the model size by 60% and speed up learning by more than seven times, while the model still significantly outperforms popular bidirectional LSTMs under both frame-level cross-entropy training and MMI-based sequence training.
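As a rough illustration of the tapped-delay-line memory described above, the following NumPy sketch shows how a memory block could summarize the current and several previous hidden activations into a fixed-size vector per time step. This is not code from the paper; the function name, tap order, and random initialization are placeholders, and a real model would learn the tap coefficients jointly with the rest of the network.

```python
import numpy as np

def fsmn_memory_block(h, order, coeffs=None):
    """Minimal sketch of a unidirectional, vectorized FSMN-style memory block.

    h      : (T, D) array of hidden-layer activations over T time steps
    order  : number of past frames kept in the tapped-delay line
    coeffs : (order + 1, D) tap coefficients; random here for illustration only
    """
    T, D = h.shape
    if coeffs is None:
        coeffs = np.random.randn(order + 1, D) * 0.1  # placeholder init
    memory = np.zeros_like(h)
    for t in range(T):
        for i in range(order + 1):
            if t - i >= 0:
                # weighted sum over the delayed activations h_{t-i}
                memory[t] += coeffs[i] * h[t - i]
    # in an FSMN, this memory is fed, together with h, into the next layer
    return memory

# usage: summarize 20 frames of 64-dim activations with a 5-frame look-back
h = np.random.randn(20, 64)
m = fsmn_memory_block(h, order=5)
print(m.shape)  # (20, 64)
```

Because the memory is a fixed-size weighted sum over a finite window, it can be computed with purely feedforward operations, which is what allows the training speedups reported in the abstract.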
