Abstract

With the abundance of conversations happening everywhere, dialogue summarization plays an increasingly important role in the real world. However, dialogues inevitably involve many personal pronouns, which hinder the performance of existing dialogue summarization models. This work proposes a framework named WHORU to inject external personal pronoun resolution (PPR) information into abstractive dialogue summarization models. A simple and effective PPR method for the dialogue domain is further proposed to reduce time and space consumption. Experiments demonstrate the superiority of the proposed methods. More importantly, WHORU achieves new SOTA results on the SAMSum and AMI datasets.
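To make the idea of injecting PPR information concrete, here is a minimal sketch of one straightforward way such information could be surfaced to a summarizer: annotating each resolved pronoun in the dialogue text with the speaker name it refers to before the dialogue is fed to the model. The function name, the annotation format, and the `resolutions` map are illustrative assumptions, not the paper's actual mechanism.

```python
import re

def inject_ppr(utterances, resolutions):
    """Append a resolved name after each personal pronoun, e.g. 'She' -> 'She (Amanda)'.

    utterances:  list of (speaker, text) pairs.
    resolutions: hypothetical PPR output, mapping
                 (utterance_index, pronoun_occurrence_index) -> resolved name.
    Returns the annotated dialogue as one string, one utterance per line.
    """
    pronoun_re = re.compile(r"\b(he|she|him|her|his|hers|they|them|I|you|me)\b",
                            re.IGNORECASE)
    lines = []
    for i, (speaker, text) in enumerate(utterances):
        occ = 0  # running index of pronoun occurrences within this utterance

        def repl(match):
            nonlocal occ
            name = resolutions.get((i, occ))  # look up this pronoun occurrence
            occ += 1
            # Annotate only pronouns the PPR step actually resolved.
            return f"{match.group(0)} ({name})" if name else match.group(0)

        lines.append(f"{speaker}: {pronoun_re.sub(repl, text)}")
    return "\n".join(lines)

# Toy dialogue: the second utterance's "She" resolves to Amanda.
dialogue = [("Amanda", "I baked cookies."),
            ("Jerry", "She will bring them tomorrow.")]
ppr = {(1, 0): "Amanda"}
print(inject_ppr(dialogue, ppr))
```

The annotated text ("Jerry: She (Amanda) will bring them tomorrow.") disambiguates the pronoun for a downstream abstractive model; unresolved pronouns are left untouched.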
