Abstract

In this paper, we describe our system for Task 4 of SemEval-2020, which involves differentiating between natural language statements that conform to common sense and those that do not. The organizers propose three subtasks: first, selecting, from a pair of sentences, the one that is against common sense; second, identifying the most crucial reason why a statement does not make sense; and third, generating novel reasons that explain the against-common-sense statement. Of the three subtasks, this paper reports the system description for subtask A and subtask B. We propose a model based on the transformer neural network architecture to address these subtasks. The novelty of our work lies in the architecture design, which handles the logical implication of contradicting statements and extracts information from both sentences simultaneously. We use parallel instances of transformers, which are responsible for a boost in performance. We achieved an accuracy of 94.8% on subtask A and 89% on subtask B on the test set.
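The later sections give the full architecture details; as a rough, hypothetical illustration of the parallel-transformer idea mentioned in the abstract, the sketch below encodes the two candidate sentences with two instances of a pretrained encoder and predicts which one is against common sense. The checkpoint name, class names, and the way the pooled states are combined are assumptions for illustration, not the paper's exact design.

```python
# Hypothetical sketch (not the paper's exact model): two parallel encoder
# instances read S1 and S2, and a linear head predicts which sentence is
# against common sense. The head would be fine-tuned on the task data.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ParallelSentenceScorer(nn.Module):
    def __init__(self, model_name="roberta-base"):  # checkpoint is an assumption
        super().__init__()
        # One encoder instance per input sentence, run in parallel.
        self.encoder_a = AutoModel.from_pretrained(model_name)
        self.encoder_b = AutoModel.from_pretrained(model_name)
        hidden = self.encoder_a.config.hidden_size
        # Two logits: index 0 -> S1 is against common sense, 1 -> S2 is.
        self.classifier = nn.Linear(2 * hidden, 2)

    def forward(self, batch_a, batch_b):
        # Use the first-token representation as a pooled sentence embedding.
        pooled_a = self.encoder_a(**batch_a).last_hidden_state[:, 0]
        pooled_b = self.encoder_b(**batch_b).last_hidden_state[:, 0]
        return self.classifier(torch.cat([pooled_a, pooled_b], dim=-1))

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = ParallelSentenceScorer()
s1 = tokenizer(["He put an elephant into the fridge."], return_tensors="pt")
s2 = tokenizer(["He put a turkey into the fridge."], return_tensors="pt")
pred = model(s1, s2).argmax(dim=-1)  # 0 -> S1 flagged as against common sense
```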

Highlights

  • Incorporating common sense in natural language understanding systems and evaluating whether a system has sense-making capability remains a fundamental question in the natural language processing field (Modi, 2017; Modi, 2016; Modi and Titov, 2014)

  • Subtask A Validation: Requires the model to choose which of the two statements S1 and S2 does not make sense. We frame it as a binary classification problem and estimate the probability that a sentence is against common sense

  • Subtask B Explanation (Multi-Choice): Requires the model to choose the most appropriate of the three reasons {O1, O2, O3} to explain the against-common-sense statement. We formulate this as a multi-class classification problem and estimate the probability that a reason is the correct explanation (a minimal sketch of both framings follows this list)
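One simple, hypothetical way to realize the two framings above is to score each candidate with a pretrained transformer and normalize over the candidates. The sketch below is illustrative only; the checkpoint, the scoring head, and the example sentences and reasons are assumptions, not the system described in the paper.

```python
# Illustrative only: score each candidate with a pretrained transformer and
# take an argmin/softmax over the candidates for subtasks A and B.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-base"  # placeholder; fine-tuning on the task data is assumed
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)

def score(texts, pairs=None):
    """Return one plausibility score per input (higher = more sensible)."""
    enc = tokenizer(texts, pairs, padding=True, return_tensors="pt")
    with torch.no_grad():
        return model(**enc).logits.squeeze(-1)

# Subtask A: the statement with the lower plausibility score is selected
# as the one that is against common sense.
s = ["He put an elephant into the fridge.", "He put a turkey into the fridge."]
against_idx = int(score(s).argmin())

# Subtask B: each (statement, reason) pair is scored, and a softmax over the
# three candidates gives the probability that a reason is the correct one.
statement = s[against_idx]
reasons = ["An elephant is much bigger than a fridge.",
           "Elephants usually live in the wild.",
           "A fridge is used to keep food fresh."]
probs = torch.softmax(score([statement] * 3, reasons), dim=0)
best_reason = int(probs.argmax())
```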


Summary

Introduction

Incorporating common sense in natural language understanding systems and evaluating whether a system has sense-making capability remains a fundamental question in the natural language processing field (Modi, 2017; Modi, 2016; Modi and Titov, 2014). One important difference between human and machine text understanding lies in the fact that humans have access to commonsense knowledge while processing text, which helps them draw inferences about facts that are not mentioned in a text but that are assumed to be common ground (Modi et al., 2017). Task 4 of SemEval-2020 (Wang et al., 2020) is a commonsense validation and explanation task: it consists of distinguishing sentences that are against common sense from sentences that make sense. In subtask A, the system must pick, out of two given sentences, the one that is against common sense. Subtask B provides three candidate reasons and asks for the one that best explains why that sentence is against common sense. The implementation of our system is made available via GitHub.

Problem Definition
Related Work
Initial Experimentation
Proposed Approach
Experimental Setup
Result Analysis
Error Analysis
Conclusion
