Summarization models that integrate knowledge graphs improve summary quality by combining text features with entity features. However, such models still suffer from two shortcomings: the knowledge graph data introduce noise that deviates from the semantics of the source text, and the text features and knowledge graph entity features cannot be fully integrated. To address these issues, a knowledge graph summarization model integrating attention alignment and momentum distillation (KGS-AAMD) is proposed. Pseudo-targets generated by a momentum distillation model serve as additional supervision signals during training to overcome data noise, while an attention-based alignment method aligns text and entity features, laying the foundation for their subsequent full integration. Experimental results on two public datasets, CNN/Daily Mail and XSum, show that KGS-AAMD surpasses multiple baseline models as well as ChatGPT in the quality of generated summaries, exhibiting significant performance advantages.
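The abstract does not specify the exact training objective, but momentum distillation is commonly implemented with an exponential-moving-average (EMA) teacher whose soft predictions serve as pseudo-targets alongside the gold labels. The following is a minimal PyTorch sketch of that general recipe, not the paper's reported method; the model interface, loss weight `alpha`, temperature `tau`, and momentum coefficient are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def make_momentum_teacher(student):
    # The teacher starts as a frozen copy of the student and is
    # updated only by exponential moving average, never by gradients.
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, momentum=0.995):
    # teacher <- momentum * teacher + (1 - momentum) * student
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.4, tau=1.0):
    # Hard loss: cross-entropy against the gold summary tokens.
    ce = F.cross_entropy(
        student_logits.flatten(0, 1), labels.flatten(), ignore_index=-100
    )
    # Soft loss: KL divergence toward the teacher's pseudo-targets,
    # which dampens the effect of noisy supervision.
    pseudo = F.softmax(teacher_logits / tau, dim=-1)
    kl = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1), pseudo, reduction="batchmean"
    )
    return (1.0 - alpha) * ce + alpha * kl
```

In a typical training loop, the teacher logits are computed under `torch.no_grad()` at each step, and `ema_update` is called after every optimizer step so the pseudo-targets track a smoothed version of the student.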
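Likewise, the attention-based alignment step could plausibly be realized as cross-attention in which each knowledge graph entity attends over the text tokens, projecting entity features into the text's semantic space before fusion. A minimal sketch under that assumption (the class name, dimensions, and residual design are hypothetical, not taken from the paper):

```python
import torch.nn as nn

class AttentionAlignment(nn.Module):
    # Aligns entity embeddings with token embeddings via cross-attention:
    # each entity queries the text tokens, yielding an entity
    # representation expressed in the text's semantic space.
    def __init__(self, d_model=768, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, entity_emb, token_emb, token_pad_mask=None):
        # entity_emb: (B, E, d); token_emb: (B, T, d);
        # token_pad_mask: (B, T) with True marking padded tokens.
        aligned, _ = self.cross_attn(
            query=entity_emb, key=token_emb, value=token_emb,
            key_padding_mask=token_pad_mask,
        )
        # Residual connection preserves the original entity information.
        return aligned + entity_emb
```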