Abstract

Language model pretraining has yielded significant gains across diverse natural language processing tasks; RoBERTa, an efficient method for pretraining self-supervised NLP systems, is a prominent example. Our hypothesis in this paper is that the performance of Spatial Role Labeling (SpRL) can be improved by combining static word vectors and bags of features with RoBERTa vectors. We show that this combination is effective on several SpRL datasets.
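As a rough illustration of the kind of combined representation the abstract describes, the sketch below concatenates RoBERTa token vectors with a static embedding lookup and a hand-crafted feature vector before a per-token classifier. This is not the authors' implementation; the label set, feature dimension, and static vocabulary size are placeholder assumptions.

```python
# Minimal sketch of combining RoBERTa vectors with static embeddings and
# a bag-of-features vector for per-token spatial role labeling.
# Assumptions: num_labels, feature_dim, and static_vocab_size are placeholders.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class CombinedSpRLTagger(nn.Module):
    def __init__(self, static_vocab_size=50000, static_dim=300,
                 feature_dim=32, num_labels=5):
        super().__init__()
        self.roberta = AutoModel.from_pretrained("roberta-base")
        self.static_emb = nn.Embedding(static_vocab_size, static_dim)
        hidden = self.roberta.config.hidden_size  # 768 for roberta-base
        self.classifier = nn.Linear(hidden + static_dim + feature_dim, num_labels)

    def forward(self, input_ids, attention_mask, static_ids, features):
        # Contextual vectors from RoBERTa, one per subword token.
        contextual = self.roberta(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Static (non-contextual) word vectors and hand-crafted features,
        # aligned to the same token positions.
        static = self.static_emb(static_ids)
        combined = torch.cat([contextual, static, features], dim=-1)
        return self.classifier(combined)  # logits: (batch, seq_len, num_labels)


tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = CombinedSpRLTagger()

enc = tokenizer("The book is on the table", return_tensors="pt")
seq_len = enc["input_ids"].shape[1]
static_ids = torch.zeros(1, seq_len, dtype=torch.long)  # placeholder lookup ids
features = torch.zeros(1, seq_len, 32)                  # placeholder feature bag
logits = model(enc["input_ids"], enc["attention_mask"], static_ids, features)
print(logits.shape)  # torch.Size([1, seq_len, 5])
```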
