Abstract

Keyphrase generation (KG) aims to condense the content of a source text into concise target phrases. Although many KG algorithms have been proposed, most are tailored to deep learning settings with specially designed strategies and may fail to address the exposure bias problem. Reinforcement learning (RL), a class of control optimization techniques, is well suited to compensate for some of the limitations of deep learning methods. Nevertheless, RL methods typically suffer from four core difficulties in keyphrase generation: environment interaction and effective exploration, complex action control, reward design, and task-specific obstacles. To tackle this difficult but significant task, we present RegRL-KG, which combines actor-critic reinforcement learning control with L1 policy regularization, built on a sequence-to-sequence (Seq2Seq) deep learning model trained under the maximum likelihood estimation (MLE) criterion, for efficient keyphrase generation. The agent uses an actor-critic network to control the generated probability distribution and employs L1 policy regularization to mitigate the exposure bias problem. Extensive experiments show that our method improves performance on standard evaluation metrics across five scientific-article benchmark datasets.
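
The abstract describes actor-critic control over a pretrained Seq2Seq policy with an L1 regularization term; the following is a minimal, hypothetical sketch of how such a fine-tuning step could look. It is not the paper's implementation: the names `model`, `critic`, `reward_fn`, and `batch` are placeholders, the reward (e.g., F1 against reference keyphrases) is assumed, and the L1 term is applied here to the actor's parameters as one plausible reading of "L1 policy regularization".

```python
import torch
import torch.nn.functional as F

def rl_finetune_step(model, critic, reward_fn, batch, optimizer, l1_coef=1e-4):
    """One hypothetical actor-critic update on top of a pretrained Seq2Seq model."""
    # Actor: score the vocabulary at each decoding step and sample a sequence.
    logits = model(batch.src, batch.prev_tokens)           # (B, T, V)
    probs = F.softmax(logits, dim=-1)
    dist = torch.distributions.Categorical(probs)
    actions = dist.sample()                                 # sampled tokens, (B, T)
    log_probs = dist.log_prob(actions)                      # (B, T)

    # Reward: sequence-level score of the decoded keyphrases, e.g. F1 vs. references.
    rewards = reward_fn(actions, batch.references)           # (B,)

    # Critic: baseline estimate of expected reward, used to reduce gradient variance.
    values = critic(batch.src).squeeze(-1)                   # (B,)
    advantage = rewards - values.detach()

    actor_loss = -(advantage.unsqueeze(1) * log_probs).mean()
    critic_loss = F.mse_loss(values, rewards)

    # L1 penalty on the policy (actor) parameters -- one possible form of
    # "L1 policy regularization"; the paper's exact formulation may differ.
    l1_penalty = sum(p.abs().sum() for p in model.parameters())

    loss = actor_loss + critic_loss + l1_coef * l1_penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```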
