Abstract
6G-enabled Internet of Things (IoT) is about to usher in a new era of the Internet of Everything (IoE), creating favorable conditions for new application services. The human-machine dialogue system, one of the most important forms of human-machine interaction, is expected to replace mobile applications in the future. This article proposes a dialogue generation scheme named the background knowledge-aware dialogue generation model with pretrained encoders (BKADGPE). Dialogue generation, which takes the conversational context as input and produces a response as output, is a sequence-to-sequence (Seq2Seq) task. Rather than generating the response from the previous sequence of utterances alone, background knowledge-aware dialogue generation also draws on background knowledge documents, because people often converse on the basis of their background knowledge. This article divides the problem into two tasks: 1) a knowledge selection task and 2) a response generation task. One of the latest pretrained language models, a lite bidirectional encoder representations from transformers (ALBERT), is applied as the encoder. In the knowledge selection task, a linear layer and a softmax layer are added on top of ALBERT to predict the span of background knowledge relevant to the context. In the response generation task, the ALBERT encoder fine-tuned on the knowledge selection task is combined with a left-context-only Transformer decoder equipped with a copy mechanism, which incorporates the selected background knowledge span into the generated response. Empirical studies on the HOLL-E dataset show that BKADGPE outperforms related work.
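The knowledge selection step described above scores every token position with a linear head over the encoder's hidden states and takes a softmax to predict the start and end of the relevant span. The following minimal sketch illustrates that idea with plain Python; the hidden vectors and the `w_start`/`w_end` weight vectors are toy stand-ins (not ALBERT outputs or BKADGPE parameters), and the span is chosen by maximizing P(start) x P(end) subject to start <= end.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def predict_span(hidden, w_start, w_end):
    """Pick the most likely (start, end) knowledge span.

    hidden: per-token encoder vectors (toy stand-ins for ALBERT outputs).
    w_start, w_end: weights of the two linear heads (hypothetical values).
    Returns (start, end) with start <= end, scored by P(start) * P(end).
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    p_start = softmax([dot(h, w_start) for h in hidden])
    p_end = softmax([dot(h, w_end) for h in hidden])
    best, best_p = (0, 0), -1.0
    for i, ps in enumerate(p_start):
        for j in range(i, len(p_end)):
            if ps * p_end[j] > best_p:
                best, best_p = (i, j), ps * p_end[j]
    return best

# Toy example: four "tokens" with 2-d hidden states.
hidden = [[0.1, 0.0], [0.9, 0.2], [0.8, 0.9], [0.0, 0.1]]
span = predict_span(hidden, w_start=[1.0, 0.0], w_end=[0.0, 1.0])
print(span)  # -> (1, 2): tokens 1..2 form the predicted knowledge span
```

In the full model the selected span is then fed, together with the dialogue context, to the decoder, whose copy mechanism can reproduce span tokens verbatim in the response.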