Abstract
In human interaction, effective communication relies on shared cognitive processes that enable individuals to comprehend the intended message of their interlocutors. Recent research in multiturn dialog generation seeks to emulate human-like responses by incorporating external knowledge into generative models to enhance language understanding. These models often represent knowledge as graphs and employ graph neural networks (GNNs) to capture dialog semantics. However, relying solely on external knowledge can fall short, because human cognition integrates both universal commonsense and personal knowledge; the latter, derived from individual experience, is frequently disregarded. To remedy this, we propose GKA-GPT, a novel GNN-based approach that merges commonsense and personal knowledge into a comprehensive cognition graph to enhance the relevance and diversity of responses in multiturn dialog scenarios. Furthermore, GKA-GPT introduces a multigrained graphical knowledge aggregation mechanism that processes semantic information at multiple levels of granularity. Our experiments demonstrate that GKA-GPT outperforms existing baselines, generating more relevant and informative responses.
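To make the graph-based aggregation idea concrete, the sketch below shows one round of GNN message passing over a merged knowledge graph. This is not the authors' implementation: the class name `GraphAggregator`, the mean-pooling aggregation scheme, and all dimensions are illustrative assumptions, standing in for whatever multigrained mechanism GKA-GPT actually uses.

```python
import torch
import torch.nn as nn

class GraphAggregator(nn.Module):
    """One round of message passing over a merged cognition graph.

    Hypothetical sketch: `adj` is a row-normalized adjacency matrix
    covering nodes from both the commonsense and personal knowledge
    graphs after they have been merged into a single graph.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Aggregate neighbor features via the normalized adjacency (mean pooling).
        neighbor = adj @ node_feats
        # Combine each node's own features with its aggregated neighborhood.
        return torch.relu(self.update(torch.cat([node_feats, neighbor], dim=-1)))

# Toy usage: 5 nodes with 16-dim embeddings and a random row-stochastic adjacency.
feats = torch.randn(5, 16)
adj = torch.softmax(torch.randn(5, 5), dim=-1)
agg = GraphAggregator(16)
print(agg(feats, adj).shape)  # torch.Size([5, 16])
```

In a multigrained setting, one could run such a layer at each granularity (e.g., token-, utterance-, and dialog-level subgraphs) and fuse the resulting representations, though the paper's specific fusion strategy is not shown here.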