Abstract
Non-neural approaches to argument mining (AM) are often pipelined and require heavy feature-engineering. In this paper, we propose a neural end-to-end approach to AM which is based on dependency parsing, in contrast to the current state-of-the-art which relies on relation extraction. Our biaffine AM dependency parser significantly outperforms the state-of-the-art, performing at F1 = 73.5% for component identification and F1 = 46.4% for relation identification. One of the advantages of treating AM as biaffine dependency parsing is the simple neural architecture that results. The idea of treating AM as dependency parsing is not new, but has previously been abandoned as it was lagging far behind the state-of-the-art. In a thorough analysis, we investigate the factors that contribute to the success of our model: the biaffine model itself, our representation for the dependency structure of arguments, different encoders in the biaffine model, and syntactic information additionally fed to the model. Our work demonstrates that dependency parsing for AM, an overlooked idea from the past, deserves more attention in the future.
Highlights
People often hold different opinions about the same thing
Compared with the dependency parsing (DP) approach of Eger et al. (2017), our model achieves substantially higher performance. We argue that this is mainly because our biaffine model is more powerful at modelling argument mining (AM)-style dependency structures, and because of other factors such as our dependency representation, which seems more closely aligned with linguistic intuitions
These results show that the DP approach to AM achieves the best results currently known on this dataset
Summary
People often hold different opinions about the same thing. To help people efficiently understand different opinions and the reasoning embedded in arguments, it is necessary to develop systems that can automatically analyse the structure of arguments. To this end, AM typically comprises four subtasks: 1) component segmentation, i.e., cutting a raw sequence into text spans that are either argumentative or non-argumentative segments (only argumentative segments are called argument components); 2) component classification, i.e., labelling each argument component with a tag in a pre-defined scheme (e.g., “PREMISE” or “CLAIM”); 3) relation detection, i.e., deciding whether two argument components are directly related; and 4) relation classification, i.e., categorizing a detected relation into a class in a pre-defined scheme (e.g., “ATTACK” or “SUPPORT”) (Persing and Ng, 2016; Eger et al., 2017; Habernal and Gurevych, 2017; Stab and Gurevych, 2017; Lawrence and Reed, 2020). We see that the “rain” component acts as the premise, supporting the claim of “beautiful”
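To make the four subtasks and the biaffine scoring idea concrete, here is a minimal sketch in Python with NumPy. It is not the paper's implementation; the toy spans, labels, dimensions, and variable names are illustrative assumptions, and the scorer only shows the general biaffine form (a bilinear product between representations of a dependent and a candidate head, with a bias term folded in).

```python
import numpy as np

# Illustrative outputs of the four AM subtasks on a hypothetical two-component text.
# Subtasks 1-2: component spans (token offsets) and component labels.
components = [
    {"span": (0, 5), "label": "CLAIM"},    # e.g., a claim that the view is beautiful
    {"span": (6, 10), "label": "PREMISE"}, # e.g., a premise about the rain
]
# Subtasks 3-4: which components are related, and the relation label.
relations = [
    {"head": 0, "dep": 1, "label": "SUPPORT"},  # premise supports claim
]

# Sketch of a biaffine scorer over n encoded units with hidden size d:
# score[i, j] = [h_i; 1]^T U h_j, the score of unit j being the head of unit i.
rng = np.random.default_rng(0)
d, n = 4, 3
H = rng.standard_normal((n, d))       # encoder outputs, one row per unit
U = rng.standard_normal((d + 1, d))   # biaffine weight with bias row folded in

H1 = np.concatenate([H, np.ones((n, 1))], axis=1)  # append bias feature
scores = H1 @ U @ H.T                  # (n, n) head-selection scores
heads = scores.argmax(axis=1)          # greedy head prediction per unit
print(scores.shape)
```

In a trained parser the head scores would be produced by a learned encoder and trained with a cross-entropy loss over candidate heads; here the random weights only demonstrate the shape of the computation.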