Abstract
Given the remarkable text generation capabilities of pre-trained language models, impressive results have been achieved in graph-to-text generation. However, when learning from knowledge graphs, these language models cannot fully grasp the structural information of the graph, leading to logical errors and the omission of key information. An important research direction, therefore, is to minimize the loss of graph structural information during model training. We propose a framework named Edge-Optimized Multi-Level Information Refinement (EMLR), which aims to maximize the retention of the graph's structural information from an edge perspective. Building on this framework, we further propose a new graph-to-text generation model, named TriELMR, which emphasizes comprehensive interactive learning between the model and the graph structure, as well as the importance of edges within that structure. TriELMR adopts three main strategies to reduce information loss during learning: (1) knowledge sequence optimization; (2) the EMLR framework; and (3) a graph activation function. Experimental results show that TriELMR achieves exceptional performance across various benchmarks, attaining BLEU-4 scores of 66.5% on the WebNLG v2.0 dataset and 37.27% on the EventNarrative dataset, surpassing state-of-the-art models. These results demonstrate the advantages of TriELMR in preserving the accuracy of graph structural information.