Abstract

Compared with general question answering, Community Question Answering (CQA) has been widely applied in scenarios such as e-commerce and is well received. To answer a user's question precisely, many CQA models resort to external knowledge sources such as Wikipedia; the main challenge of the task is knowledge extraction and utilization. Instead of the traditional approach of designing task-specific knowledge modules, we propose a graph prompt-based learning method that directly steers a pretrained language model to solve CQA tasks. Multiple information sources are organized as graph prompts to guide the model's generation, naturally leveraging the knowledge learned during pretraining. Built on pretrained Bidirectional and Auto-Regressive Transformers (BART), a large-scale language model, our method achieves comparable performance in less than 10% of the full-finetuning time by optimizing only the graph prompt parameters. Experiments on two standard CQA datasets show that, compared with traditionally initialized sequential prompts, graph prompts achieve 20.47% and 14.89% improvements in BLEU and ROUGE-L scores under quick finetuning, and they also outperform sequential prompts in few-shot learning.
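The abstract describes tuning only graph-structured prompt parameters while the pretrained BART backbone stays frozen. Below is a minimal sketch of that setup in PyTorch with Hugging Face Transformers; the prompt-graph construction (number of nodes, adjacency matrix, single message-passing step) and the fusion by prepending prompt vectors to the encoder's token embeddings are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: graph prompt tuning on a frozen BART model. The graph structure,
# prompt length, and fusion strategy below are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration, BartTokenizer

class GraphPrompt(nn.Module):
    """Learnable prompt nodes refined by one message-passing step over a fixed graph."""
    def __init__(self, num_nodes: int, hidden: int, adj: torch.Tensor):
        super().__init__()
        self.node_emb = nn.Parameter(torch.randn(num_nodes, hidden) * 0.02)
        self.register_buffer("adj", adj)  # fixed prompt-graph adjacency (not trained)
        self.proj = nn.Linear(hidden, hidden)

    def forward(self) -> torch.Tensor:
        # Each prompt node aggregates its neighbors, then adds a residual update.
        h = self.adj @ self.node_emb
        return self.node_emb + torch.tanh(self.proj(h))

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
tok = BartTokenizer.from_pretrained("facebook/bart-base")
for p in model.parameters():  # freeze the entire language model
    p.requires_grad = False

num_nodes, hidden = 8, model.config.d_model
# Placeholder fully connected graph; real edges would link question,
# answer, and external-knowledge nodes as the paper's method organizes them.
adj = torch.full((num_nodes, num_nodes), 1.0 / num_nodes)
prompt = GraphPrompt(num_nodes, hidden, adj)
optimizer = torch.optim.AdamW(prompt.parameters(), lr=1e-3)

question = "How do I reset the device?"   # toy example pair
answer = "Hold the power button for ten seconds."
enc = tok(question, return_tensors="pt")
labels = tok(answer, return_tensors="pt").input_ids

# Prepend graph-prompt vectors to the encoder's token embeddings.
tok_emb = model.get_input_embeddings()(enc.input_ids)
prompt_emb = prompt().unsqueeze(0)  # shape (1, num_nodes, hidden)
inputs_embeds = torch.cat([prompt_emb, tok_emb], dim=1)
attn = torch.cat(
    [torch.ones(1, num_nodes, dtype=enc.attention_mask.dtype), enc.attention_mask],
    dim=1,
)

optimizer.zero_grad()
out = model(inputs_embeds=inputs_embeds, attention_mask=attn, labels=labels)
out.loss.backward()  # gradients flow only into the graph prompt parameters
optimizer.step()
```

Freezing the backbone is what makes quick finetuning cheap: only the num_nodes × d_model node embeddings plus a small projection receive gradients, while the full BART model is used unchanged for generation.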

Disclaimer: All third-party content on this website/platform is and will remain the property of their respective owners and is provided on "as is" basis without any warranties, express or implied. Use of third-party content does not indicate any affiliation, sponsorship with or endorsement by them. Any references to third-party content is to identify the corresponding services and shall be considered fair use under The CopyrightLaw.