Abstract

RNN encoder-decoder architectures have well-known difficulties generating meaningful responses. Variational autoencoders (VAEs) combined with hierarchical RNNs have emerged as a powerful framework for conversation modeling, as the latent variables can encode high-level information (topics, tones, sentiments, etc.) in conversations. Meanwhile, BERT, one of the latest deep pre-trained language representation models, has achieved remarkable state-of-the-art results across a wide range of natural language processing tasks. However, BERT has not yet been investigated for conversation generation. In this paper, we explore different BERT-empowered conversation modeling approaches that combine BERT, RNNs, and VAEs. BERT can be used either with its weights fixed, as a feature extraction module, or with its weights updated and optimized for a specific task. We demonstrate that simply using fixed pre-trained BERT as part of the model, without further fine-tuning, is powerful enough to generate better responses in terms of fluency, grammar, and semantic coherence, while fine-tuning achieves comparable results. This paper sets new baselines for the conversation generation task, and we are the first to demonstrate the success of BERT in conversation modeling.
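
As an illustration of the "weights fixed" setting mentioned above, the sketch below shows one way to use a pre-trained BERT as a frozen feature extractor whose utterance-level representations could feed a downstream RNN/VAE conversation model. This is a minimal sketch, not code from the paper: the model name, the [CLS] pooling choice, and the use of the Hugging Face Transformers library are assumptions for illustration.

```python
# Minimal sketch (illustrative, not from the paper): a pre-trained BERT with
# frozen weights used as a fixed feature extractor for utterances.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

# Freeze all BERT parameters so they are never updated during training.
for param in bert.parameters():
    param.requires_grad = False
bert.eval()

utterance = "how are you doing today ?"
inputs = tokenizer(utterance, return_tensors="pt")

with torch.no_grad():
    outputs = bert(**inputs)

# Use the [CLS] token representation as a fixed utterance-level feature that a
# downstream hierarchical RNN/VAE conversation model could condition on.
utterance_feature = outputs.last_hidden_state[:, 0, :]  # shape: (1, 768)
print(utterance_feature.shape)
```

In the alternative fine-tuning setting, the freezing loop would simply be omitted so that BERT's weights are updated together with the rest of the model.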
