Abstract

Compared with general question answering, Community Question Answering (CQA) has been widely adopted in scenarios such as e-commerce and is well received by users. To answer a user's question precisely, many CQA models resort to external knowledge sources such as Wikipedia, and the main challenge of the task lies in knowledge extraction and utilization. Instead of the traditional approach of designing task-specific knowledge modules, we propose a graph prompt-based learning method that directly steers a pretrained language model to solve CQA tasks. Multiple information sources are organized as graph prompts that guide the model's generation, naturally leveraging the knowledge acquired during pretraining. Built on BART (Bidirectional and Auto-Regressive Transformers), a large-scale pretrained language model, comparable performance is achieved in less than 10% of the full-finetuning time by optimizing only the graph prompt parameters. Experiments on two standard CQA datasets show that, compared with traditionally initialized sequential prompts, the graph prompt achieves 20.47% and 14.89% improvements in BLEU and ROUGE-L scores under quick finetuning and also outperforms them in few-shot learning.
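The parameter-efficient recipe the abstract describes (a frozen pretrained BART, with only the prompt parameters optimized) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the `GraphPromptBart` class, the single GCN-style message-passing step, and the fully connected adjacency over prompt nodes are all hypothetical simplifications, since the abstract does not specify the graph prompt architecture.

```python
import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration


class GraphPromptBart(nn.Module):
    """Prompt tuning with a frozen BART backbone: only the prompt
    parameters (and a small graph projection) receive gradients."""

    def __init__(self, model_name="facebook/bart-base", num_prompts=20):
        super().__init__()
        self.bart = BartForConditionalGeneration.from_pretrained(model_name)
        # Freeze all pretrained parameters; they are never updated.
        for p in self.bart.parameters():
            p.requires_grad = False
        d = self.bart.config.d_model
        # Learnable embedding for each prompt node.
        self.prompt_embed = nn.Parameter(torch.randn(num_prompts, d) * 0.02)
        # Hypothetical fully connected, row-normalized adjacency matrix;
        # in the paper's setting this would instead encode the structure
        # of the external information sources.
        adj = torch.ones(num_prompts, num_prompts)
        self.register_buffer("adj", adj / adj.sum(dim=-1, keepdim=True))
        self.graph_proj = nn.Linear(d, d)  # one GCN-style update step

    def forward(self, input_ids, attention_mask, labels=None):
        # One round of message passing among the prompt nodes.
        prompts = torch.tanh(self.graph_proj(self.adj @ self.prompt_embed))
        bsz = input_ids.size(0)
        prompts = prompts.unsqueeze(0).expand(bsz, -1, -1)
        # Prepend the graph prompts to the encoder's token embeddings.
        tok = self.bart.get_input_embeddings()(input_ids)
        inputs_embeds = torch.cat([prompts, tok], dim=1)
        prompt_mask = attention_mask.new_ones(bsz, prompts.size(1))
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.bart(inputs_embeds=inputs_embeds,
                         attention_mask=mask, labels=labels)


# Only the prompt-side parameters are handed to the optimizer.
model = GraphPromptBart()
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Because gradients flow only through the prompt embeddings and the small graph projection, each training step updates a tiny fraction of the model's parameters, which is consistent with the reported training cost of under 10% of full finetuning.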
