Abstract

Inspired by the notion of a curriculum that allows human learners to acquire knowledge from easy to difficult materials, curriculum learning (CL) has been applied to many areas, including Natural Language Processing (NLP). Most previous CL methods in NLP order training texts by their length. We posit, however, that learning from semantically similar texts is more effective than relying on superficial measures of easiness such as text length. We therefore propose a new CL method that uses semantic dissimilarity as the complexity measure and a tree-structured curriculum as the organization method. In our experiments, the proposed method outperforms previous CL methods on a sentiment analysis task.
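
The abstract does not describe the implementation, so the following Python sketch is only a hypothetical illustration of the general idea: pre-computed sentence embeddings are assumed as input, cosine dissimilarity from the corpus centroid stands in for the semantic complexity measure, and agglomerative clustering stands in for the tree-structured curriculum. The function names and all specific choices here are the sketch's own, not the authors' method.

```python
# Illustrative sketch only, not the paper's implementation.
# Assumes pre-computed sentence embeddings (rows = texts).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics.pairwise import cosine_distances


def dissimilarity_scores(embeddings: np.ndarray) -> np.ndarray:
    """Cosine dissimilarity of each text from the corpus centroid (a stand-in complexity score)."""
    centroid = embeddings.mean(axis=0, keepdims=True)
    return cosine_distances(embeddings, centroid).ravel()


def tree_curriculum(embeddings: np.ndarray, n_stages: int = 3) -> list[np.ndarray]:
    """Group texts into curriculum stages ordered from semantically central (easy) to distant (hard).

    A hierarchical (tree-structured) clustering over the embeddings defines groups of
    semantically similar texts; stages are then ordered by the mean dissimilarity of
    their members from the centroid.
    """
    scores = dissimilarity_scores(embeddings)
    tree = linkage(embeddings, method="average", metric="cosine")
    groups = fcluster(tree, t=n_stages, criterion="maxclust")
    stage_order = sorted(np.unique(groups), key=lambda g: scores[groups == g].mean())
    return [np.where(groups == g)[0] for g in stage_order]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(100, 32))  # placeholder for real sentence embeddings
    for i, idx in enumerate(tree_curriculum(emb)):
        print(f"stage {i}: {len(idx)} texts")
```

A trainer could then present the stages in the returned order, which mirrors the easy-to-difficult progression that curriculum learning relies on.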
