Abstract

In the last 5 years, language representation models such as BERT and GPT-3, based on transformer neural networks, have led to enormous progress in natural language processing (NLP). One such NLP task is commonsense reasoning, where performance is usually evaluated through multiple-choice question answering benchmarks. To date, many such benchmarks have been proposed, and 'leaderboards' tracking state-of-the-art performance on those benchmarks suggest that transformer-based models are approaching human-like performance. Because these are commonsense benchmarks, however, such a model should be expected to generalize: at least in aggregate, it should not exhibit excessive performance loss across independent commonsense benchmarks, regardless of the specific benchmark on whose training set it has been fine-tuned. In this article, we evaluate this expectation by proposing a methodology and experimental study to measure the generalization ability of language representation models using a rigorous and intuitive metric. Using five established commonsense reasoning benchmarks, our experimental study shows that the models do not generalize well and may be susceptible to issues such as dataset bias. The results therefore suggest that current performance on benchmarks may be an over-estimate, especially if we want to apply such models to novel commonsense problems for which no training dataset is available for the language representation model to fine-tune on.
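To make the evaluation setup concrete, the minimal sketch below illustrates one way to summarize cross-benchmark generalization as an aggregate relative performance drop. This is our illustration, not the paper's code or its actual metric: it assumes a table of accuracies is available for every fine-tune/evaluate benchmark pair, and the benchmark names and numbers shown are placeholders only.

```python
from statistics import mean

def generalization_drop(acc, benchmarks):
    """Average relative accuracy drop when a model fine-tuned on one benchmark
    is evaluated on each of the other benchmarks (lower is better).

    acc[src][tgt] = accuracy of the model fine-tuned on `src`, evaluated on `tgt`.
    """
    drops = []
    for src in benchmarks:
        in_domain = acc[src][src]  # accuracy on the benchmark it was fine-tuned on
        for tgt in benchmarks:
            if tgt != src:
                # relative loss when transferring to an unseen benchmark
                drops.append((in_domain - acc[src][tgt]) / in_domain)
    return mean(drops)

# Placeholder benchmarks and toy numbers, purely for illustration.
toy_acc = {
    "benchmark_A": {"benchmark_A": 0.80, "benchmark_B": 0.55},
    "benchmark_B": {"benchmark_A": 0.50, "benchmark_B": 0.78},
}
print(generalization_drop(toy_acc, ["benchmark_A", "benchmark_B"]))  # ~0.34
```

A value near zero would indicate that fine-tuning on one benchmark transfers to the others with little loss; a large value indicates the kind of generalization gap the article investigates.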
