Abstract
In multi-turn dialogue, users commonly produce utterances with ambiguous pronoun referents and omitted information, yielding semantically incomplete representations. These issues cause textual incoherence, because unclear referents and missing components hinder machine comprehension of the utterances. To resolve the semantic gaps in such texts, researchers commonly rely on multi-turn dialogue rewriting. However, existing dialogue-rewriting methods often suffer from low precision and high latency when handling these texts. To mitigate these shortcomings, this paper proposes a Transformer-based dialogue-rewriting model that uses pointer-based extraction. The method leverages a pre-trained Transformer to extract the latent semantic features of the text and a pointer mechanism to locate its key information. By extracting keywords and appropriately replacing or inserting text, the model restores referents and omitted information. Experiments on an open-source Chinese multi-turn dialogue-rewriting dataset show that the proposed method improves both the accuracy and the efficiency of rewriting compared with existing methods: the ROUGE-1 score increases by 2.9%, while the time consumption decreases by 50% relative to the benchmark method.
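The abstract's pointer-extraction idea can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the class and head names (PointerRewriter, start_head, end_head) are hypothetical, and a small randomly initialized Transformer encoder stands in for the pre-trained model. The encoder reads the concatenated dialogue history and current utterance, and two pointer heads score each position as the start or end of a span in the history to be copied into the utterance, replacing a pronoun or filling an omission.

```python
# Hypothetical sketch of pointer-based span extraction for dialogue rewriting.
import torch
import torch.nn as nn


class PointerRewriter(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Pointer heads: score every token as a candidate span start / end.
        self.start_head = nn.Linear(d_model, 1)
        self.end_head = nn.Linear(d_model, 1)

    def forward(self, token_ids: torch.Tensor):
        # token_ids: (batch, seq_len) over [dialogue history ; current utterance]
        hidden = self.encoder(self.embed(token_ids))        # (batch, seq, d_model)
        start_logits = self.start_head(hidden).squeeze(-1)  # (batch, seq)
        end_logits = self.end_head(hidden).squeeze(-1)      # (batch, seq)
        return start_logits, end_logits


# Toy usage: pick the most likely span to splice into the current utterance.
model = PointerRewriter(vocab_size=1000)
ids = torch.randint(0, 1000, (1, 32))
start_logits, end_logits = model(ids)
start = start_logits.argmax(dim=-1).item()
end = end_logits.argmax(dim=-1).item()
print(f"extracted span: tokens {start}..{end}")
```

In practice the extracted span would be inserted at, or substituted for, the incomplete position in the current utterance, which is how the rewriting restores referents and omitted information without generating the whole sentence from scratch.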