Abstract

Convolutional Neural Networks (CNNs) are the state of the art for many computer vision problems, including object detection. However, reducing the computational complexity of a CNN is a key prerequisite to deploying state-of-the-art deep learning networks in many low-power embedded real-time robotic applications. Pruning has been shown to be an effective method for reducing the computational complexity of a CNN while maintaining accuracy. In the literature, accuracy lost through pruning is recovered with extended fine-tuning of the pruned network at the end of the pruning procedure, but further pruning is not conducted after this extended fine-tuning. In this work we modify the pruning procedure to incorporate extended fine-tuning at intervals during the procedure, maintaining network accuracy while pruning further than would otherwise be possible. We evaluate this procedure on a small-scale custom object detection dataset and on the more challenging standard PASCAL VOC dataset. On the former, the new procedure achieves a 19.6× reduction in FLOPs for a drop of only 0.4% mean Average Precision (mAP), while on the latter it achieves only a 1.8× reduction in FLOPs for a drop of 0.8% mAP. The results indicate differing levels of parameter redundancy in the initial networks.
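
The following is a minimal sketch, not the authors' code, of the general idea of iterative pruning with extended fine-tuning interleaved at intervals, written with PyTorch's built-in pruning utilities. The toy CNN, random surrogate data, pruning fraction, and schedule values are illustrative assumptions rather than settings from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy CNN standing in for the detection network (assumption, not the paper's model).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune(model, steps):
    """Fine-tuning pass; random tensors stand in for real training data."""
    model.train()
    for _ in range(steps):
        x = torch.randn(8, 3, 32, 32)
        y = torch.randint(0, 10, (8,))
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

num_prune_rounds = 10   # total pruning rounds (assumed value)
extended_ft_every = 3   # run an extended fine-tuning phase at this interval (assumed value)

for round_idx in range(1, num_prune_rounds + 1):
    # Prune 10% of the remaining filters in each conv layer (L2 structured pruning).
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.ln_structured(module, name="weight", amount=0.1, n=2, dim=0)
    fine_tune(model, steps=10)            # short recovery fine-tuning after each round
    if round_idx % extended_ft_every == 0:
        fine_tune(model, steps=100)       # extended fine-tuning at intervals, then pruning continues

# Fold the pruning masks into the weights to make the pruning permanent.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.remove(module, "weight")
```

The point of the sketch is the control flow: rather than reserving one long fine-tuning run for the very end, extended fine-tuning phases are placed inside the pruning loop so that pruning can continue afterwards.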
