Abstract

In deep learning and image recognition, neural models with complex structures are usually selected as the training model to improve recognition accuracy. However, such complex models require a large amount of computation and are time-consuming, which limits the deployment of deep CNNs on resource-limited devices such as mobile phones. This paper presents a new logo recognition approach based on knowledge distillation, which improves the recognition accuracy of a small model through knowledge transfer. At the same time, a bias neural network is introduced to increase the recognition accuracy of the target classes. We select ResNet-50 as the cumbersome network, and ResNet-18 and VGG16 as the small networks. With knowledge distillation alone, the average recognition accuracy of ResNet-18 and VGG16 increased by 8% and 11% respectively. With the proposed bias neural network, the recognition accuracy of ResNet-18 and VGG16 further increased by 2%–10%. The recognition accuracy on the target classes is within 5% of that of ResNet-50, which means the bias neural network, with fewer layers and parameters, reaches nearly the same recognition performance as the cumbersome network on the target logo classes. The experiments validate that the bias neural network can improve the accuracy of the bias classes.
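To make the knowledge-transfer idea concrete, the following is a minimal sketch of a standard distillation loss in the style of Hinton et al.: the student is trained against the teacher's temperature-softened output distribution plus the hard label. The function names, and the `temperature` and `alpha` values, are illustrative assumptions, not taken from the paper.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; T > 1 softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.7):
    """Weighted sum of the soft-target term KL(teacher || student)
    at temperature T and cross-entropy with the hard label.
    temperature and alpha are illustrative hyperparameters."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # Soft-target term, scaled by T^2 to keep gradient magnitudes
    # comparable to the hard-label term.
    soft = temperature ** 2 * sum(
        p * math.log(p / q) for p, q in zip(p_teacher, p_student))
    # Hard-label cross-entropy on the unsoftened student distribution.
    hard = -math.log(softmax(student_logits)[true_label])
    return alpha * soft + (1 - alpha) * hard
```

In practice the teacher (here, ResNet-50) is frozen while the student (ResNet-18 or VGG16) minimizes this combined loss over the training set.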
