Abstract
Deep convolutional neural network (CNN) based image classification has been one of the most actively discussed topics in recent years, and continual innovation in network architectures keeps making it more accurate and efficient. However, training a neural network from scratch is time-consuming and demands substantial computational resources. Using a pre-trained network as a feature extractor for an image classification task, i.e. "transfer learning", is therefore a popular approach that saves time and computational power in practical applications of CNNs. This paper proposes an efficient way of building a full model from any pre-trained model, with high accuracy and low memory cost, using knowledge distillation. The distilled knowledge from the last layer of the pre-trained network is passed through fully-connected blocks with different numbers of hidden layers, each followed by a Softmax layer. The accuracies of these student networks are only slightly lower than those of the full models, and a student model's accuracy is a clear indicator of the corresponding full network's accuracy. In this way, the number of hidden layers in the dense block that gives the best accuracy without over-fitting for a given pre-trained network can be found in less time. Here VGG16 and VGG19 (pre-trained on the ImageNet dataset) are tested on chest X-rays (pneumonia and COVID-19). In finding the best number of hidden layers, this approach saves nearly 44 minutes for the VGG19-based and 36 minutes 37 seconds for the VGG16-based feature-extractor CNN models.
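As a rough illustration of the architecture described above (a minimal sketch under assumed hyper-parameters, not the authors' implementation), the following Keras snippet freezes a pre-trained VGG19 backbone and attaches fully-connected heads with different numbers of hidden layers, each ending in a Softmax classifier. The hidden-layer width, input size, and two-class output are assumptions chosen only for illustration.

```python
# Sketch of the transfer-learning setup described in the abstract:
# a frozen VGG feature extractor followed by a dense block whose depth
# (number of hidden layers) is the hyper-parameter being searched.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

def build_head_model(num_hidden_layers, num_classes=2, input_shape=(224, 224, 3)):
    # Pre-trained VGG19 backbone, used purely as a frozen feature extractor.
    backbone = VGG19(weights="imagenet", include_top=False, input_shape=input_shape)
    backbone.trainable = False

    x = layers.Flatten()(backbone.output)
    # Fully-connected block with a variable number of hidden layers
    # (width of 256 is an assumed value for illustration).
    for _ in range(num_hidden_layers):
        x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs=backbone.input, outputs=outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Candidate dense-block depths to compare; training each head (or a distilled
# student standing in for it) and comparing validation accuracy selects the
# best number of hidden layers for the given pre-trained backbone.
candidate_depths = [1, 2, 3]
models_to_compare = {d: build_head_model(d) for d in candidate_depths}
```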