Abstract

Curriculum learning (CL) has shown promise for fine-tuning pre-trained models on the response selection task. However, existing CL methods rely heavily on fixed data buckets and a predefined learning schedule. This paper seeks to overcome these constraints by proposing a novel dynamic curriculum learning (DCL) algorithm that employs: (1) a new difficulty measurement that combines linguistic features with model confidence, and (2) an automatic uncertainty-based learning scheduler with a dynamic data sampling policy. This approach enables the model to autonomously assemble suitable data batches and adjust the curriculum stage on its own. Experiments on three multi-turn dialogue datasets show that our DCL method outperforms three strong baselines and a competitive method. In-depth analysis demonstrates that our approach dynamically adapts the CL process, which in turn makes effective use of existing data and alleviates data scarcity.
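
To make the two components concrete, the sketch below is a hypothetical, simplified illustration rather than the paper's actual implementation. It assumes dialogue length as the linguistic feature, the model's probability for the ground-truth response as its confidence, and predictive entropy as the uncertainty signal that triggers curriculum advancement; the names `difficulty`, `sample_batch`, and `maybe_advance_stage` are illustrative only.

```python
# Hypothetical sketch of a dynamic curriculum loop (not the paper's code):
# a difficulty score blends a linguistic feature (dialogue length) with model
# confidence, and an uncertainty check decides when to advance the curriculum.
from dataclasses import dataclass
from typing import List
import math
import random


@dataclass
class Example:
    num_tokens: int    # simple linguistic feature: length of the dialogue context
    model_prob: float  # model confidence in the ground-truth response, in (0, 1)


def difficulty(ex: Example, max_tokens: int, alpha: float = 0.5) -> float:
    """Blend normalized length with (1 - confidence); higher means harder."""
    length_term = ex.num_tokens / max_tokens
    confidence_term = 1.0 - ex.model_prob
    return alpha * length_term + (1.0 - alpha) * confidence_term


def predictive_entropy(prob: float) -> float:
    """Binary entropy of the model's confidence, used as an uncertainty signal."""
    p = min(max(prob, 1e-8), 1.0 - 1e-8)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))


def sample_batch(pool: List[Example], stage: float, batch_size: int) -> List[Example]:
    """Dynamic sampling: draw only from examples no harder than the current stage."""
    max_tokens = max(ex.num_tokens for ex in pool)
    eligible = [ex for ex in pool if difficulty(ex, max_tokens) <= stage]
    return random.sample(eligible, min(batch_size, len(eligible)))


def maybe_advance_stage(batch: List[Example], stage: float,
                        entropy_threshold: float = 0.5, step: float = 0.1) -> float:
    """Advance the curriculum once average uncertainty on the batch drops low enough."""
    if not batch:
        return stage
    avg_entropy = sum(predictive_entropy(ex.model_prob) for ex in batch) / len(batch)
    return min(1.0, stage + step) if avg_entropy < entropy_threshold else stage


if __name__ == "__main__":
    random.seed(0)
    pool = [Example(num_tokens=random.randint(10, 200),
                    model_prob=random.uniform(0.05, 0.95)) for _ in range(500)]
    stage = 0.3  # start with the easiest slice of the data
    for epoch in range(5):
        batch = sample_batch(pool, stage, batch_size=32)
        stage = maybe_advance_stage(batch, stage)
        print(f"epoch {epoch}: stage={stage:.2f}, batch size={len(batch)}")
```

In this toy setup the curriculum stage is a threshold on the difficulty score, so batches are assembled on the fly from whatever the model can currently handle, and the stage only moves forward when the uncertainty signal indicates the model is confident on the current slice of data.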
