Abstract

Large convolutional networks have recently achieved impressive classification performance. To pursue higher accuracy, convolutional networks tend to grow deeper. However, increasing network depth causes linear growth in computational complexity without an equivalent gain in classification accuracy. To alleviate this mismatch, we propose a cascading approach to accelerate classification with very deep convolutional neural networks. By using an entropy metric to analyze the statistical differences between images that the basic networks classify correctly and those they misclassify, we assign easily distinguished images to shallow networks to reduce computational complexity, and leave hard-to-classify images to deep networks to maintain overall performance. Moreover, the proposed cascaded networks can exploit the complementarity between different networks, which may boost classification accuracy beyond that of the deepest network. We conduct experiments with residual networks of different depths on the CIFAR-100 dataset. While obtaining accuracy similar to that of the deepest network, our cascaded ResNet32-ResNet110 and ResNet32-ResNet164 reduce computation time by 48.6% and 44.3% compared to ResNet110 and ResNet164, respectively, and the cascaded ResNet32-ResNet110-ResNet164 reduces computation time by 85.4% compared to the very deep ResNet1001.
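The routing idea described above can be sketched as follows: compute the entropy of a shallow network's softmax output, accept its prediction if the entropy falls below a threshold (an "easy" image), and otherwise forward the image to the next, deeper network. This is a minimal illustrative sketch, not the paper's implementation; the network callables and threshold values are hypothetical placeholders.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over class logits.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def entropy(probs):
    # Shannon entropy of the predicted class distribution;
    # low entropy means a confident ("easy") prediction.
    return -np.sum(probs * np.log(probs + 1e-12))

def cascade_predict(image, networks, thresholds):
    """Route an image through networks of increasing depth.

    `networks`: list of callables (shallow to deep) returning class logits.
    `thresholds`: one entropy cutoff per network except the last.
    The deepest network always produces a final answer.
    """
    for net, tau in zip(networks[:-1], thresholds):
        probs = softmax(net(image))
        if entropy(probs) < tau:
            return int(np.argmax(probs))  # confident: stop early
    # Hard image: fall through to the deepest network.
    return int(np.argmax(softmax(networks[-1](image))))
```

In practice the thresholds would be tuned on a validation set so that the shallow network only keeps images it classifies reliably, which is how the cascade trades a small accuracy change for a large reduction in average computation.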
