Abstract

We propose a new teacherstudent framework (TSF)-based knowledge transfer method, in which knowledge in the form of dense flow across layers is distilled from a pre-trained "teacher" deep neural network (DNN) and transferred to another "student" DNN. In the case of distilled knowledge, multiple overlapped flow-based items of information from the pre-trained teacher DNN are densely extracted across layers. Transference of the densely extracted teacher information is then achieved in the TSF using repetitive sequential training from bottom to top between the teacher and student DNN models. In other words, to efficiently transmit extracted useful teacher information to the student DNN, we perform bottom-up step-by-step transfer of densely distilled knowledge. The performance of the proposed method in terms of image classification accuracy and fast optimization is compared with those of existing TSF-based knowledge transfer methods for application to reliable image datasets, including CIFAR-10, CIFAR-100, MNIST, and SVHN. When the dense flow-based sequential knowledge transfer scheme is employed in the TSF, the trained student ResNet more accurately reflects the rich information of the pre-trained teacher ResNet and exhibits superior accuracy to the existing TSF-based knowledge transfer methods for all benchmark datasets considered in this study.
