Abstract

The variational recurrent autoencoder (VRAE) is an appealing technique for capturing the variability underlying complex sequential data, realized by introducing high-level latent random variables as hidden states. Existing models suffer from the well-known ‘posterior collapse’ problem: a powerful autoregressive decoder in the model can capture all of the variability by itself, leaving the latent variables to learn nothing from the data. From the perspective of model training, posterior collapse manifests as a very low Kullback–Leibler (KL) divergence, meaning the posteriors of the latent variables degenerate to their priors. In this paper, we address this problem by proposing a Bayesian variational recurrent neural network (BVRNN) model, in which two additional decoders are added to the original VRAE. These extra decoders force the latent variables to learn meaningful knowledge during training. We conduct experiments on the MNIST and Fashion-MNIST datasets, and the results show that the proposed model outperforms several baseline models. We further adapt the proposed model to a challenging natural language processing task, namely Named-Entity Recognition (NER). Experimental results show that our model is competitive with state-of-the-art models on NER.
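To make the collapse symptom described above concrete, the sketch below (not part of the paper; shapes and values are purely illustrative) computes the closed-form KL term between a diagonal Gaussian posterior and a standard normal prior. This is the quantity typically monitored during VAE training, and it drops toward zero when the posterior degenerates to the prior.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ),
    summed over latent dimensions and averaged over the batch."""
    kl_per_dim = 0.5 * (mu ** 2 + np.exp(logvar) - logvar - 1.0)
    return kl_per_dim.sum(axis=-1).mean()

# An informative posterior keeps some distance from the prior ...
mu = np.random.randn(32, 16) * 0.8          # hypothetical batch of posterior means
logvar = np.full((32, 16), -1.0)            # hypothetical log-variances
print("informative posterior KL:", gaussian_kl(mu, logvar))

# ... whereas under posterior collapse mu -> 0 and logvar -> 0,
# so the monitored KL term approaches zero.
print("collapsed posterior KL:", gaussian_kl(np.zeros((32, 16)),
                                             np.zeros((32, 16))))
```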
