Abstract

Knowledge distillation (KD) transfers discriminative knowledge from a large and complex model (the teacher) to a smaller and faster one (the student). Existing advanced KD methods are limited to fixed feature-extraction paradigms that capture the teacher's structural knowledge to guide the training of the student, and they often fail to transfer comprehensive knowledge to the student. To this end, in this article we propose a new approach, synchronous teaching knowledge distillation (STKD), which integrates online teaching and offline teaching to transfer rich and comprehensive knowledge to the student. In the online teaching stage, a blockwise unit is designed to distill intermediate-level and high-level knowledge, enabling bidirectional guidance between the teacher and student networks. The intermediate-level information interaction provides additional supervisory signals to the student network and helps improve the quality of the final predictions. In the offline teaching stage, STKD applies a pretrained teacher that supplies prior knowledge, further improving performance and accelerating training. Trained simultaneously with the online teacher, the student learns multilevel and comprehensive knowledge by incorporating both online and offline teaching, so STKD combines the advantages of different KD strategies. Experimental results on the SVHN, CIFAR-10, CIFAR-100, and ImageNet ILSVRC 2012 real-world datasets show that the proposed method achieves significant performance improvements over state-of-the-art methods, with a particularly favorable trade-off between accuracy and model size. Code for STKD is provided at https://github.com/nanxiaotong/STKD.
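
The sketch below is not the authors' STKD implementation (see the linked repository for that); it is a minimal illustration, under assumed loss weights and helper names such as `kd_kl`, `blockwise_mse`, and `stkd_step`, of how an online teaching term (teacher trained jointly with the student, with bidirectional logit guidance and intermediate feature matching) can be combined with an offline teaching term (a frozen pretrained teacher) in one training objective.

```python
# Minimal sketch (assumptions labeled): combine online KD (jointly trained teacher,
# bidirectional guidance, intermediate feature matching) with offline KD
# (frozen pretrained teacher). Loss weights and helper names are illustrative.
import torch
import torch.nn.functional as F


def kd_kl(student_logits, teacher_logits, T=4.0):
    """Soft-label distillation loss on high-level (logit) knowledge, scaled by T^2."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)


def blockwise_mse(student_feats, teacher_feats):
    """Intermediate-level loss: match features block by block (assumes equal shapes)."""
    return sum(F.mse_loss(s, t) for s, t in zip(student_feats, teacher_feats))


def stkd_step(student, online_teacher, offline_teacher, x, y,
              alpha=1.0, beta=1.0, gamma=0.5):
    # Student and online teacher are trained simultaneously; the offline teacher
    # is a frozen, pretrained network that contributes prior knowledge.
    s_feats, s_logits = student(x)
    t_feats, t_logits = online_teacher(x)
    with torch.no_grad():
        _, p_logits = offline_teacher(x)

    # Hard-label supervision for both jointly trained networks.
    loss_student = F.cross_entropy(s_logits, y)
    loss_teacher = F.cross_entropy(t_logits, y)

    # Online teaching: bidirectional guidance on logits plus
    # intermediate-level feature interaction.
    loss_online = (
        kd_kl(s_logits, t_logits.detach())
        + kd_kl(t_logits, s_logits.detach())
        + gamma * blockwise_mse(s_feats, [t.detach() for t in t_feats])
    )

    # Offline teaching: distill from the frozen pretrained teacher.
    loss_offline = kd_kl(s_logits, p_logits)

    return loss_student + loss_teacher + alpha * loss_online + beta * loss_offline
```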
