Abstract
Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing
Highlights
Welcome to the first Workshop on Commonsense Inference in Natural Language Processing (COIN).
The model trained on the smaller dataset (RACE examples selected for identified deficiencies only) achieves accuracy equivalent to the model trained on the full RACE reading comprehension dataset, while using only half the data. This underscores the importance of the abstract semantic norm task, as the associated data selection process was effective in choosing examples directly related to the deficiencies (a sketch of such a selection step follows the highlights).
We recognize that these improvements are very limited; to our surprise, the Option Comparison Network (OCN) model pre-trained on Open Mind Common Sense (OMCS) or ATOMIC performed significantly worse.
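The following is a minimal, hypothetical sketch of the deficiency-driven selection idea described above; the function name, field names, and category labels are illustrative assumptions, not taken from the proceedings.

from typing import Dict, List

# Inference categories where error analysis found the baseline model weakest
# (the category names are illustrative placeholders, not from the paper).
DEFICIENT_CATEGORIES = {"causal", "temporal", "script_knowledge"}

def select_for_deficiencies(examples: List[Dict]) -> List[Dict]:
    """Keep only the RACE-style examples whose annotated inference type
    matches one of the identified deficiencies."""
    return [ex for ex in examples if ex.get("inference_type") in DEFICIENT_CATEGORIES]

# Usage: the selected subset is smaller than the full corpus but targets the
# phenomena the baseline gets wrong.
corpus = [
    {"id": "q1", "inference_type": "causal"},
    {"id": "q2", "inference_type": "lexical"},
]
print(select_for_deficiencies(corpus))  # -> [{'id': 'q1', 'inference_type': 'causal'}]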
Summary
Welcome to the first Workshop on Commonsense Inference in Natural Language Processing (COIN). This workshop takes place for the first time and focuses on research around modeling commonsense knowledge, developing computational models thereof, and applying commonsense inference methods in NLP tasks. Commonsense knowledge is often left implicit, and statistical models fail to learn it because of this reporting bias (Gordon and van Durme, 2013). This critical difference between machine learning systems and human intelligence hurts performance on examples outside the training data distribution (Gordon and van Durme, 2013; Schubert, 2015; Davis and Marcus, 2015; Sakaguchi et al., 2019). NLP systems have recently improved dramatically on a wide range of tasks with contextualized word representations (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019). These representations have the benefit of encoding context-specific meanings of words learned from large corpora. We also observe that all attributes with high accuracy on MCScript have a high fit score.
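As a brief illustration of what "context-specific meanings" means in practice, the sketch below (not part of the proceedings; it assumes the Hugging Face transformers and PyTorch libraries and the bert-base-uncased checkpoint) extracts two vectors for the same word form in different sentences and compares them.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of the first occurrence of `word`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

# The same surface form receives different vectors in different contexts,
# so the two "bank" embeddings are not identical.
v_river = word_vector("she sat on the bank of the river", "bank")
v_money = word_vector("she deposited cash at the bank", "bank")
print(torch.cosine_similarity(v_river, v_money, dim=0).item())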