Abstract

For deployment on an embedded processor, a distracted-driver classification model must satisfy three demands at once: high accuracy, real-time inference, and a small storage footprint. Conventional deep CNN models such as VGG, ResNet, and DenseNet aim primarily for high accuracy, which makes them too heavy for an embedded system with limited memory and computing resources. In contrast, lightweight models are heavily compressed but sacrifice significant accuracy. To bridge this gap, we propose an instance-specific multi-teacher knowledge distillation model (IsMt-KD) to learn more accurate, faster, and lighter CNNs for distracted driver posture classification. In multi-teacher knowledge distillation, most current approaches either randomly select one teacher model and use its prediction as the soft label, or assign an equal weight to every teacher and average all the teachers' predictions as the soft label. We observe that, for the same instance, the outputs of different teachers vary greatly: some teachers classify it correctly while others assign high probabilities to irrelevant classes. It is therefore inappropriate to give the teachers fixed or identical weights. To this end, we design a simple yet effective instance-specific teacher grading module that dynamically assigns weights to the teacher models based on each individual instance. In this way, we can dynamically distill knowledge from multiple teachers by considering both instance-specific high-level and instance-specific intermediate-level information. Extensive experiments on the AUC and StateFarm datasets, together with implementations on edge hardware platforms including the HUAWEI MediaPad c5 and the Nvidia Jetson TX2, verify the effectiveness and feasibility of our approach.
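To make the instance-specific weighting idea concrete, the following is a minimal NumPy sketch, not the paper's actual grading module (which is not specified in the abstract). The confidence-based grading heuristic, the temperature value, and all function names are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def instance_teacher_weights(teacher_logits):
    """Hypothetical grading module: weight each teacher, per instance,
    by its peak softmax confidence (a stand-in for the paper's module).

    teacher_logits: array of shape (n_teachers, batch, n_classes)
    returns: weights of shape (n_teachers, batch), summing to 1 over teachers
    """
    probs = softmax(teacher_logits)
    conf = probs.max(axis=-1)                      # (n_teachers, batch)
    return conf / conf.sum(axis=0, keepdims=True)  # normalize per instance

def distillation_soft_labels(teacher_logits, T=4.0):
    """Combine the teachers' softened predictions into one soft label
    per instance, using the instance-specific weights above."""
    w = instance_teacher_weights(teacher_logits)   # (n_teachers, batch)
    probs = softmax(teacher_logits, T)             # (n_teachers, batch, n_classes)
    return (w[..., None] * probs).sum(axis=0)      # (batch, n_classes)
```

A student network would then be trained against these per-instance soft labels (e.g., with a KL-divergence loss) instead of a uniform or randomly chosen teacher average.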
