Abstract

A method for data set formation has been developed to verify the ability of pre-trained models to learn transitivity dependencies. The generated data set was used to test how well transitivity dependencies are learned in the natural language inference (NLI) task. A data set of 10,000 samples, built from MultiNLI, was used to test the RoBERTa model. The model was found to learn transitive dependencies well in the logical inference task, since all samples from the formed data set were correctly classified as belonging to the similar, contradiction, or neutral class. It was also found that in the logical inference task the similar class is more directional than the contradiction and neutral classes: when the premise and hypothesis in the data set are swapped, the accuracy of the RoBERTa model decreases by factors of 2.97, 1.17, and 1.26 for the similar (0.98→0.33), neutral (0.90→0.77), and contradiction (0.98→0.78) classes, respectively. One iteration of the study takes 0.0028 seconds, so collecting only half of the data set requires approximately 84 hours. This research is relevant because the ability of natural language models to learn dependencies such as transitivity, which are not explicitly specified in the training data set, is an important element of a model's ability to generalize.
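To make the evaluation protocol concrete, the following is a minimal sketch of the transitivity and premise–hypothesis swap checks described above. It assumes the public HuggingFace "roberta-large-mnli" checkpoint as a stand-in for the paper's model, and the sentence triple is illustrative, not drawn from the paper's generated data set.

```python
# Sketch of a transitivity check for an NLI model (assumes the public
# "roberta-large-mnli" checkpoint; sentences are hypothetical examples).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()

def classify(premise: str, hypothesis: str) -> str:
    """Return the predicted NLI label: CONTRADICTION, NEUTRAL, or ENTAILMENT."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax(dim=-1))]

# Transitive chain A -> B -> C: if A entails B and B entails C, a model that
# has learned transitivity should also predict that A entails C, even though
# the pair (A, C) never appears as such in the training data.
a = "A man is playing a guitar on stage."
b = "A man is playing an instrument."
c = "A person is making music."

print(classify(a, b))  # link 1 of the chain
print(classify(b, c))  # link 2 of the chain
print(classify(a, c))  # transitivity check: entailment expected

# Directionality check: swapping premise and hypothesis should degrade the
# entailment ("similar") class far more than neutral or contradiction.
print(classify(c, a))  # reversed pair; often no longer entailment
```

Scoring such predictions over every transitive triple in the generated data set, both in the original and in the swapped order, yields per-class accuracies of the kind reported in the abstract.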
