Abstract

State-of-the-art machine teaching techniques overestimate learners' ability to grasp a complex concept. On the one hand, since a complex concept always comprises multiple fine-grained concepts, a learner may grasp only some of them during a practical teaching process. On the other hand, because a single teaching sample carries unequal amounts of information about the various fine-grained concepts, learners absorb them to different degrees. Thus, as datasets grow increasingly complex, existing machine teaching frameworks need to be rethought. In this work, we propose a new machine teaching framework called Attentive Machine Teaching (AMT). Specifically, we argue that a complex concept always consists of multiple features, which we call fine-grained concepts. We define attention to represent a learner's learning level on a fine-grained concept. We then propose AMT, an adaptive teaching framework that constructs a personalized optimal teaching dataset for each learner. In each iteration, AMT estimates the learner's ability with a Graph Neural Network (GNN) and selects the best sample using a pool-based search. To corroborate our theoretical findings, we conduct extensive experiments on both synthetic and real datasets. The experimental results verify the effectiveness of the AMT algorithm.
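The iterative procedure sketched above can be illustrated with a minimal example: an estimate of the learner's per-concept attention drives a pool-based search for the next teaching sample. In this sketch the GNN-based ability estimator is replaced by a simple smoothed-frequency placeholder, and the function names, scoring rule, and simulated learner are illustrative assumptions rather than the authors' implementation.

import numpy as np

def estimate_attention(responses, n_concepts):
    # Placeholder for the GNN-based estimate of the learner's attention
    # (learning level) over each fine-grained concept.
    hits = np.zeros(n_concepts)
    counts = np.zeros(n_concepts)
    for concept_mask, correct in responses:
        hits += concept_mask * correct
        counts += concept_mask
    return (hits + 1.0) / (counts + 2.0)  # Laplace-smoothed per-concept estimate

def select_sample(pool, attention):
    # Pool-based search: greedily pick the sample whose fine-grained concepts
    # are currently least mastered according to the attention estimate.
    scores = pool @ (1.0 - attention)
    return int(np.argmax(scores))

def teach(pool, n_rounds, n_concepts, learner):
    responses = []
    for _ in range(n_rounds):
        attention = estimate_attention(responses, n_concepts)
        idx = select_sample(pool, attention)
        correct = learner(pool[idx])          # show the sample, observe feedback
        responses.append((pool[idx], correct))
    return estimate_attention(responses, n_concepts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_concepts, pool_size = 5, 50
    # Each teaching sample carries an unequal mix of fine-grained concepts.
    pool = (rng.random((pool_size, n_concepts)) < 0.4).astype(float)
    true_mastery = rng.random(n_concepts)

    def learner(sample):
        # Simulated learner: answers correctly in proportion to its mastery
        # of the concepts present in the shown sample.
        present = sample > 0
        p = true_mastery[present].mean() if present.any() else 0.5
        return float(rng.random() < p)

    final = teach(pool, n_rounds=30, n_concepts=n_concepts, learner=learner)
    print("Estimated attention per concept:", np.round(final, 2))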
