Generative text summaries often suffer from factual inconsistencies, in which the summary deviates from the source text, significantly reducing their usefulness. To address this issue, we propose a novel method for improving the factual accuracy of Chinese summaries by leveraging dependency graphs. Our approach first parses the input text to build a dependency graph. The graph and the original text are then encoded by separate models: a Relational Graph Attention Network (RGAT) for the dependency graph and a Transformer encoder for the text itself. A Transformer decoder then generates the summary from both representations. We evaluate the factual consistency of the generated summaries using several methods. Experiments demonstrate that our approach improves ROUGE-1 by about 7.79 points over a baseline Transformer model on the Chinese LCSTS dataset, and by 4.48 points under the StructBERT-based factual consistency assessment model.
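The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the dual-encoder idea it describes: a relational graph attention encoder over the dependency graph alongside a Transformer text encoder, with both representations fed to a Transformer decoder. All class names, dimensions, the single-layer graph encoder, the token-as-node assumption, the batch size of 1, and the concatenation-based fusion are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of the architecture outlined in the abstract.
# Assumptions: tokens double as dependency-graph nodes, batch size 1,
# one relational graph attention layer, and fusion by concatenation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalGraphAttentionLayer(nn.Module):
    """One relational graph attention layer: attention scores are
    conditioned on the dependency relation type of each edge."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.rel_emb = nn.Embedding(num_relations, dim)  # one vector per relation type

    def forward(self, h, edge_index, edge_type):
        # h: (num_nodes, dim); edge_index: (2, num_edges); edge_type: (num_edges,)
        src, dst = edge_index
        rel = self.rel_emb(edge_type)
        # relation-aware attention logit for each edge
        scores = (self.q(h)[dst] * (self.k(h)[src] + rel)).sum(-1) / h.size(-1) ** 0.5
        scores = scores - scores.max()  # numerical stability
        alpha = torch.exp(scores)
        # normalize over the incoming edges of each destination node
        denom = torch.zeros(h.size(0), device=h.device).index_add_(0, dst, alpha)
        alpha = alpha / (denom[dst] + 1e-9)
        # aggregate relation-augmented messages from source nodes
        msg = alpha.unsqueeze(-1) * (self.v(h)[src] + rel)
        out = torch.zeros_like(h).index_add_(0, dst, msg)
        return F.relu(out + h)  # residual connection

class GraphAugmentedSummarizer(nn.Module):
    """Transformer text encoder + relational graph attention encoder,
    fused by concatenation, feeding a Transformer decoder."""
    def __init__(self, vocab_size, dim=256, num_relations=40, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.text_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), num_layers=2)
        self.graph_enc = RelationalGraphAttentionLayer(dim, num_relations)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, heads, batch_first=True), num_layers=2)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, src_ids, edge_index, edge_type, tgt_ids):
        text_h = self.text_enc(self.embed(src_ids))                 # (1, T, dim)
        graph_h = self.graph_enc(self.embed(src_ids)[0], edge_index, edge_type)
        memory = torch.cat([text_h, graph_h.unsqueeze(0)], dim=1)   # fuse both views
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        dec = self.decoder(self.embed(tgt_ids), memory, tgt_mask=tgt_mask)
        return self.out(dec)

# Toy usage with random ids and arcs, purely to show the expected shapes.
model = GraphAugmentedSummarizer(vocab_size=5000)
src = torch.randint(0, 5000, (1, 12))   # token ids for one sentence
edges = torch.randint(0, 12, (2, 11))   # dependency arcs (head -> dependent)
rels = torch.randint(0, 40, (11,))      # relation type per arc
tgt = torch.randint(0, 5000, (1, 6))
logits = model(src, edges, rels, tgt)   # (1, 6, 5000)
```

Conditioning attention on the relation embedding is what lets the graph encoder distinguish, say, a subject arc from an object arc between the same two tokens; how the paper actually injects relation types, and how the two encoder outputs are combined, would need the full text to confirm.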