Abstract

Recurrent neural networks (RNNs) have been widely used for sequential learning and have achieved great success in a variety of tasks. The temporal convolutional network (TCN), a variant of the one-dimensional convolutional neural network (CNN), was also developed for learning from sequence data. An RNN typically captures long-term features in the temporal domain, while a TCN captures short-term features in the spatial domain. This paper presents a new sequential learning model, called the convolutional recurrent network (CRN), which employs a TCN as an encoder and an RNN as a decoder so that the global semantics as well as the local dependencies are simultaneously characterized from sequence data. To improve the interpretability and robustness of neural models, we further develop a stochastic formulation of the CRN based on variational inference. The merits of the CNN and the RNN are then incorporated in the inference of a latent space, which yields a generative model for sequential prediction. Experiments on language modeling show the effectiveness of the stochastic CRN compared with other sequential machines.
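
The abstract does not specify the architecture in detail, but the described design (a causal convolutional encoder, a Gaussian latent variable inferred variationally, and a recurrent decoder) can be sketched as follows. This is a minimal illustration, not the authors' implementation; all module names, layer sizes, and hyperparameters (d_model, z_dim, the choice of GRU, the number of dilation levels) are hypothetical assumptions.

```python
# A minimal sketch (not the authors' code) of a stochastic convolutional
# recurrent network: a TCN-style dilated causal convolutional encoder, a
# variational Gaussian latent variable, and a GRU decoder for next-token
# prediction. All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class CRN(nn.Module):
    def __init__(self, vocab_size, d_model=256, z_dim=64, kernel_size=3, levels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # TCN encoder: stacked dilated causal convolutions capture local dependencies.
        convs = []
        for i in range(levels):
            dilation = 2 ** i
            convs += [
                nn.ConstantPad1d(((kernel_size - 1) * dilation, 0), 0.0),  # left-pad => causal
                nn.Conv1d(d_model, d_model, kernel_size, dilation=dilation),
                nn.ReLU(),
            ]
        self.encoder = nn.Sequential(*convs)
        # Variational inference: map encoder features to Gaussian latent parameters.
        self.to_mu = nn.Linear(d_model, z_dim)
        self.to_logvar = nn.Linear(d_model, z_dim)
        # RNN decoder: a GRU models long-term temporal structure given the latent code.
        self.decoder = nn.GRU(d_model + z_dim, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)                                # (B, T, d_model)
        h = self.encoder(x.transpose(1, 2)).transpose(1, 2)   # (B, T, d_model)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        y, _ = self.decoder(torch.cat([x, z], dim=-1))
        logits = self.out(y)                                  # next-token logits
        # KL term of the evidence lower bound against a standard normal prior.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return logits, kl
```

Under this reading, training would maximize an evidence lower bound: a cross-entropy reconstruction term on the predicted tokens plus the KL regularizer returned above, as in a standard variational autoencoder.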
