Abstract

Pruning studies to date have focused on uncovering a smaller network by removing redundant units and fine-tuning to compensate for the resulting accuracy drop. In this study, unlike prior work, we propose an approach to uncover a smaller network that is competent only in a specific task, similar to the top-down attention mechanism in the human visual system. This approach does not require fine-tuning and is proposed as a fast and effective alternative to training from scratch when the network focuses on a specific task within the dataset. Pruning starts from the output and proceeds towards the input by computing neuron importance scores in each layer and propagating them to the preceding layer; neurons determined to be worthless are pruned along the way. We applied our approach to three benchmark datasets: MNIST, CIFAR-10 and ImageNet. The results demonstrate that the proposed pruning method typically reduces computational units and storage without significantly harming accuracy.
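To make the backward, output-to-input pruning flow concrete, below is a minimal sketch assuming a fully-connected network stored as a list of NumPy weight matrices. The importance-propagation rule used here (importance-weighted magnitude of outgoing connections), the `keep_ratio` threshold, and the function name `prune_backward` are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def prune_backward(weights, output_importance, keep_ratio=0.5):
    """Sketch of backward pruning: start from the output layer, score the
    neurons of each preceding layer from the importance of the layer above,
    and drop low-scoring ("worthless") neurons as we move towards the input.

    weights[l] has shape (n_{l+1}, n_l), mapping layer l to layer l+1.
    output_importance scores the output neurons, e.g. nonzero only for the
    classes of the specific task of interest (an assumption for this sketch).
    """
    keep = np.flatnonzero(output_importance)      # output neurons to retain
    importance = np.asarray(output_importance, dtype=float)[keep]
    pruned = []

    # Walk from the last layer towards the input.
    for W in reversed(weights):
        W = W[keep, :]                            # drop rows feeding pruned neurons above
        # Propagate importance to the preceding layer: a neuron's score is the
        # importance-weighted magnitude of its outgoing connections
        # (one plausible scoring choice, not necessarily the paper's rule).
        prev_importance = np.abs(W).T @ importance
        # Keep only the top-scoring fraction of neurons in the preceding layer.
        k = max(1, int(keep_ratio * prev_importance.size))
        keep = np.sort(np.argsort(prev_importance)[-k:])
        pruned.append(W[:, keep])                 # remove columns of pruned neurons
        importance = prev_importance[keep]

    pruned.reverse()
    return pruned
```

Because each layer is pruned directly from the propagated scores, the smaller network is obtained in a single backward pass with no retraining, which is the sense in which the approach avoids fine-tuning.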
