Abstract

Each argument begins with a conclusion, followed by one or more premises that support it. The warrant is a critical component of Toulmin's argument model: it explains why the premises support the claim. Despite its critical role in establishing a claim's veracity, the warrant is frequently omitted or left implicit, leaving readers to infer it. We consider the problem of generating more diverse and higher-quality warrants in response to a claim and evidence. First, we employ BART [1] as a conditional sequence-to-sequence language model to guide the generation process, fine-tuning it on the ARCT dataset [2]. Second, we propose the Multi-Agent Network for Warrant Generation, a model that produces more diverse and high-quality warrants by combining Reinforcement Learning (RL) and Generative Adversarial Networks (GANs) with a mechanism of mutual awareness among agents. Our model generates a greater variety of warrants than the baseline models, and the experimental results validate the effectiveness of the proposed hybrid approach.

Highlights

  • The term "argument mining" refers to the process of automatically identifying and extracting the structure of inference and reasoning expressed as natural language arguments [3]

  • In published work [6], different methods have been used to generate warrants. First, models for identifying warrant-relevant fragments were presented, including a Lexical Chain with Multi-Head Attention, an RST-based algorithm, and a Causality-based Selection algorithm. Each of these models is followed by a Reinforcement Learning (RL) generation process

  • Another model for warrant generation employs RST in conjunction with a Multi-Head Attention generator enhanced by reinforcement learning


Summary

Introduction

The term "argument mining" refers to the process of automatically identifying and extracting the structure of inference and reasoning expressed as natural language arguments [3]. In published work [6], different methods have been used to generate warrants. First, models for identifying warrant-relevant fragments were presented, including a Lexical Chain with Multi-Head Attention, an RST-based algorithm, and a Causality-based Selection algorithm. Each of these models is followed by a Reinforcement Learning (RL) generation process. To further advance the state of the art, we anticipate that combining BART's pretrained language model with a multi-agent mechanism will enable the generation of more diverse and high-quality warrants. In this expanded version, we include additional knowledge information, such as the target, keywords, and topic, and we present additional results and a discussion based on a new model.
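Conditioning BART on a claim and its evidence amounts to packing both into a single source sequence and training the model to emit the warrant as the target. The sketch below illustrates one plausible way to format ARCT-style (claim, evidence, warrant) triples for such fine-tuning; the separator token, function name, and example texts are illustrative assumptions, not the authors' exact preprocessing.

```python
# Sketch: formatting an ARCT-style triple into a source/target pair for
# conditional sequence-to-sequence fine-tuning (e.g., with BART).
# The separator and field names are assumptions for illustration.

SEP = " </s> "  # assumed separator between claim and evidence


def build_example(claim: str, evidence: str, warrant: str) -> dict:
    """Pack one (claim, evidence, warrant) triple into a training example.

    The model conditions on claim + evidence (the source) and is trained
    to generate the warrant (the target).
    """
    return {
        "source": claim.strip() + SEP + evidence.strip(),
        "target": warrant.strip(),
    }


example = build_example(
    claim="Public libraries should stay open on Sundays.",
    evidence="Weekend visits account for a large share of library use.",
    warrant="If many people rely on weekend access, closing on Sundays would underserve them.",
)
print(example["source"])
```

In a full pipeline, the `source` string would be tokenized as the encoder input and the `target` as the decoder labels; the multi-agent RL/GAN stage would then refine the generator's outputs beyond this supervised objective.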

