Abstract

Document-grounded conversation (DGC) aims to generate informative responses grounded in a background document. It is normally formulated as a sequence-to-sequence (Seq2seq) learning problem that directly maps source sequences, i.e., the context and background documents, to the target sequence, i.e., the response. These responses are typically used as the final output without further polishing and may therefore suffer from global information loss owing to the auto-regressive decoding paradigm. To tackle this problem, some studies have designed two-pass generation schemes to improve response quality. However, these approaches cannot distinguish inappropriate words produced in the first pass, so they may retain erroneous words while rewriting correct ones. In this paper, we design a scheduled error correction network (SECN) with multiple generation passes that explicitly locates and rewrites erroneous words from previous passes. Specifically, a discriminator is employed to identify erroneous words, which are then revised by a refiner. Moreover, we apply curriculum learning with a reasonable learning schedule to train our model on conversations from easy to hard, where complexity is measured by the number of decoding passes. We conduct comprehensive experiments on a public document-grounded conversation dataset, Wizard of Wikipedia, and the results demonstrate significant improvements over several strong baselines.
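As a reading aid, the following is a minimal sketch of the multi-pass decode-and-correct loop the abstract describes: a first-pass generator drafts a response, a discriminator flags tokens it judges erroneous, and a refiner rewrites only the flagged tokens until no errors remain or a pass budget is exhausted. Every name here (`multi_pass_decode`, `generator`, `discriminator`, `refiner`, `max_passes`, `curriculum_order`) is a hypothetical stand-in for illustration, not the paper's actual interface; the real modules are trained neural networks.

```python
from typing import Callable, List

def multi_pass_decode(
    context: str,
    document: str,
    generator: Callable[[str, str], List[str]],             # first-pass Seq2seq decoder (stand-in)
    discriminator: Callable[[List[str]], List[bool]],       # flags erroneous tokens (stand-in)
    refiner: Callable[[List[str], List[bool]], List[str]],  # rewrites flagged tokens (stand-in)
    max_passes: int = 3,
) -> List[str]:
    """Draft a response, then iteratively locate and rewrite erroneous
    tokens until the discriminator accepts every token or the pass
    budget is exhausted."""
    tokens = generator(context, document)       # pass 1: draft response
    for _ in range(max_passes - 1):             # passes 2..N: error correction
        errors = discriminator(tokens)          # True = token judged erroneous
        if not any(errors):
            break                               # nothing left to correct
        tokens = refiner(tokens, errors)        # rewrite only the flagged tokens
    return tokens

def curriculum_order(examples: list, passes_needed: Callable[[object], int]) -> list:
    """Order training conversations from easy to hard, where difficulty is
    measured by the number of decoding passes a conversation requires
    (mirroring the learning schedule sketched in the abstract)."""
    return sorted(examples, key=passes_needed)

if __name__ == "__main__":
    # Toy stand-ins purely to show the control flow of the correction loop.
    draft = lambda ctx, doc: ["wikipedia", "is", "grate"]
    flag = lambda toks: [t == "grate" for t in toks]                    # mark the misspelling
    fix = lambda toks, errs: [("great" if e else t) for t, e in zip(toks, errs)]
    print(multi_pass_decode("hi", "doc", draft, flag, fix))             # ['wikipedia', 'is', 'great']
```

Note that the refiner touches only the tokens the discriminator flags, which is exactly the property the abstract claims prior two-pass methods lack: without an explicit error locator, a full rewrite can overwrite correct words while leaving erroneous ones in place.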
