Abstract

Key Point Analysis (KPA) is one of the most essential tasks in building an opinion summarization system, which is capable of generating key points for a collection of arguments toward a particular topic. Furthermore, KPA allows quantifying the coverage of each summary by counting its matched arguments. To create high-quality summaries, it is necessary to have an in-depth understanding of each individual argument as well as its overall semantics in the specified context. In this paper, we introduce a promising model, named Matching the Statements (MTS), that incorporates the discussed topic information into argument/key point comprehension to fully understand their meanings, and thus accurately ranks and retrieves the best-matching key points for an input argument. Our approach achieved 4th place in Track 1 of the Quantitative Summarization – Key Point Analysis Shared Task by IBM, yielding a competitive performance of 0.8956 (3rd) and 0.9632 (7th) in strict and relaxed mean Average Precision, respectively.
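To make the matching step concrete, the following is a minimal sketch of topic-aware argument-to-key-point retrieval: each statement is prefixed with its topic, embedded with an off-the-shelf bi-encoder, and an argument is assigned its highest-scoring key point whenever the score clears a threshold. The sentence-transformers model, the separator string, and the 0.5 threshold are illustrative assumptions that stand in for the actual MTS architecture; MTS itself integrates the topic through its context integration and statement encoding layers (see the outline below).

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def best_key_point(topic, argument, key_points, threshold=0.5):
    # Prefix every statement with its topic so the encoder sees the context.
    texts = [topic + " </s> " + argument] + [topic + " </s> " + kp for kp in key_points]
    embeddings = model.encode(texts, convert_to_tensor=True, normalize_embeddings=True)
    # Cosine similarity between the argument and each candidate key point.
    scores = util.cos_sim(embeddings[0:1], embeddings[1:]).squeeze(0)
    best = int(scores.argmax())
    score = float(scores[best])
    # Report no match when even the best key point is not confident enough.
    return (key_points[best] if score >= threshold else None), score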

Highlights

  • Bar-Haim et al. (2020) posed a question regarding the summarizing ability of a small set of key points, and to some extent answered it themselves by developing baseline models that can produce a concise, bullet-like summary of crowd-contributed arguments

  • Our approach achieved 4th place in Track 1 of the Quantitative Summarization – Key Point Analysis Shared Task by IBM, yielding a competitive performance of 0.8956 (3rd) and 0.9632 (7th) in strict and relaxed mean Average Precision, respectively

  • ArgKP-2021 (Bar-Haim et al., 2020), the dataset used in the Quantitative Summarization – Key Point Analysis Shared Task, is split by topic into training and development sets with a ratio of 24 : 4. The training set comprises 5583 arguments and 207 key points, while the development set contains 932 arguments and 36 key points (a small split sketch follows this list)
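The topic-level split in the last bullet can be reproduced in a few lines of pandas. This is only a sketch: the file name and the topic/argument/key_point column names are hypothetical placeholders for however the ArgKP-2021 release is stored locally, and the official shared-task split fixes which four topics form the development set rather than sampling them at random.

import pandas as pd

# Labelled (argument, key point) pairs; file and column names are assumed.
pairs = pd.read_csv("arg_kp_pairs.csv")

# Hold out 4 of the 28 topics for development (random here, fixed in the task).
dev_topics = set(pairs["topic"].drop_duplicates().sample(4, random_state=0))

train = pairs[~pairs["topic"].isin(dev_topics)]   # 24 topics
dev = pairs[pairs["topic"].isin(dev_topics)]      # 4 held-out topics

for name, split in [("train", train), ("dev", dev)]:
    print(name,
          split["argument"].nunique(), "arguments,",
          split["key_point"].nunique(), "key points")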

Summary

Related Work

A standard approach for key point and argument analysis is properly extracting their meaningful semantics. Our model stems from recent literature based on siamese neural networks (Reimers and Gurevych, 2019; Gao et al., 2021) to measure the semantic similarity between arguments and key points. Reimers and Gurevych (2019) proposed a sentence embedding method via fine-tuning BERT models on natural language inference (NLI) datasets, and more recent studies in learning sentence representations have followed the contrastive learning paradigm, achieving state-of-the-art performance on numerous benchmark tasks.
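The contrastive learning paradigm mentioned above can be written down compactly as an InfoNCE-style objective with in-batch negatives, the setup popularized by Gao et al. (2021). The batch size, embedding dimension, and temperature below are illustrative assumptions, and the random tensors merely stand in for encoder outputs; the loss itself is cross-entropy over a similarity matrix whose diagonal holds the true pairs.

import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, temperature=0.05):
    # anchor[i] and positive[i] form a matching pair; every other row in the
    # batch serves as an in-batch negative for row i.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.T / temperature   # (batch, batch) cosine similarities
    targets = torch.arange(anchor.size(0))       # true pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: noisy copies of random vectors act as positives.
anchors = torch.randn(8, 768)
positives = anchors + 0.1 * torch.randn(8, 768)
print(info_nce_loss(anchors, positives).item())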

Semantic matching
Problem definition
Methodology
Data preparation
Context integration layer
Statement encoding layer
Training
Experiment
Evaluation protocol
Differential Analysis