Keyphrase generation is a fundamental task in natural language processing (NLP). Most existing work on keyphrase generation optimizes the negative log-likelihood loss over a holistic output distribution but does not directly manipulate the copy and generating spaces, which may limit the generalizability of the decoder. Additionally, existing keyphrase models either cannot determine a dynamic number of keyphrases or determine that number only implicitly. In this article, we propose a probabilistic keyphrase generation model built from copy and generating spaces. The model is based on the vanilla variational encoder-decoder (VED) framework; on top of VED, two separate latent variables model the data distribution within the latent copy and generating spaces, respectively. Specifically, we adopt a von Mises-Fisher (vMF) distribution to obtain a condensed variable that modifies the generating probability distribution over the predefined vocabulary. Meanwhile, a clustering module promotes Gaussian mixture learning and extracts a latent variable for the copy probability distribution. Moreover, we exploit a natural property of the Gaussian mixture network, using the number of filtered components to determine the number of keyphrases. The model is trained with latent-variable probabilistic modeling, neural variational inference, and self-supervised learning. Experiments on social media and scientific-article datasets show that the model outperforms state-of-the-art baselines in prediction accuracy and in controlling the number of generated keyphrases.
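The abstract describes a gated mixture of a copy distribution and a generating distribution, each steered by its own latent variable, with the count of surviving mixture components fixing the number of keyphrases. The sketch below is a minimal PyTorch illustration of one such decoding step under our own assumptions: the class name CopyGenStep, the unit-norm Gaussian stand-in for the vMF sample, the point-estimate GMM latent, and the component-filtering threshold are all hypothetical, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

class CopyGenStep(torch.nn.Module):
    """One decoding step mixing a copy distribution and a generating
    distribution, each conditioned on its own latent variable."""

    def __init__(self, hidden_dim, vocab_size, latent_dim, n_components):
        super().__init__()
        self.vocab_size = vocab_size
        self.gen_proj = torch.nn.Linear(hidden_dim + latent_dim, vocab_size)
        self.copy_query = torch.nn.Linear(hidden_dim + latent_dim, hidden_dim)
        self.gate = torch.nn.Linear(hidden_dim + 2 * latent_dim, 1)
        # Gaussian-mixture parameters for the copy-space latent.
        self.mix_logits = torch.nn.Parameter(torch.zeros(n_components))
        self.means = torch.nn.Parameter(torch.randn(n_components, latent_dim))

    def forward(self, dec_h, enc_h, src_ids):
        # dec_h: (B, H) decoder state; enc_h: (B, S, H) encoder states;
        # src_ids: (B, S) long tensor of source-token ids for copy scattering.
        B = dec_h.size(0)
        # Generating-space latent: a unit-norm Gaussian draw stands in for
        # the paper's vMF sample (a direction on the hypersphere).
        z_gen = F.normalize(torch.randn(B, self.means.size(1)), dim=-1)
        # Copy-space latent: expectation over mixture components, a point
        # estimate standing in for a sampled GMM posterior.
        w = F.softmax(self.mix_logits, dim=-1)                     # (K,)
        z_copy = (w[:, None] * self.means).sum(0).expand(B, -1)    # (B, D)
        # Components surviving a filter give the keyphrase count
        # (illustrative threshold; the paper's filtering rule may differ).
        n_keyphrases = int((w > 1.0 / len(w)).sum())
        # Generating distribution over the predefined vocabulary.
        p_gen = F.softmax(self.gen_proj(torch.cat([dec_h, z_gen], -1)), -1)
        # Copy distribution: attention over source positions, scattered
        # back into vocabulary space.
        q = self.copy_query(torch.cat([dec_h, z_copy], -1))        # (B, H)
        attn = F.softmax(torch.einsum("bh,bsh->bs", q, enc_h), -1)
        p_copy = torch.zeros(B, self.vocab_size).scatter_add(1, src_ids, attn)
        # A soft gate mixes the two probability spaces.
        lam = torch.sigmoid(self.gate(torch.cat([dec_h, z_gen, z_copy], -1)))
        return lam * p_copy + (1 - lam) * p_gen, n_keyphrases
```

Keeping the two latents separate is what lets the gate trade off the copy and generating spaces explicitly rather than through a single entangled distribution, which is the manipulation the abstract says prior work lacks.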