Abstract
Compressed sensing with $\ell_1$ regularization is among the most powerful and popular sparsification techniques in many applications, so why has it not been used to obtain sparse deep learning models such as convolutional neural networks (CNNs)? This paper aims to answer this question and to show how to make it work. We first demonstrate that the commonly used stochastic gradient descent (SGD) training algorithm and its variants are not an appropriate match for $\ell_1$ regularization, and we then replace them with a training algorithm based on the regularized dual averaging (RDA) method. RDA was originally designed for convex problems, but with new theoretical insight and algorithmic modifications (proper initialization and adaptivity), we make it an effective match for $\ell_1$ regularization, achieving state-of-the-art sparsity for CNNs compared with other weight-pruning methods without compromising accuracy (e.g., 95\% sparsity for ResNet18 on CIFAR-10).
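For context, the key mechanism that distinguishes RDA from SGD here is that the classical $\ell_1$-RDA method (Xiao, 2010) soft-thresholds the running average of the stochastic gradients rather than the weights, which produces exact zeros during training. The sketch below illustrates that classical closed-form update under assumed names and a $\gamma\sqrt{t}$ proximal schedule; it does not reproduce the paper's specific modifications (initialization and adaptivity).

```python
import numpy as np

# Minimal sketch of the classical l1-RDA update (Xiao, 2010).
# Function name and the gamma*sqrt(t) schedule are illustrative
# assumptions, not the paper's exact modified variant.
def rda_l1_step(grad_sum, t, lam, gamma):
    """Return the weight vector after t steps.

    grad_sum : running sum of raw (unregularized) stochastic gradients
    t        : step count (1-indexed)
    lam      : l1 regularization strength
    gamma    : scale of the sqrt(t) proximal term
    """
    g_bar = grad_sum / t  # averaged gradient
    # Soft-threshold the averaged gradient: coordinates with
    # |g_bar| <= lam become exactly zero, which is the source of sparsity.
    shrunk = np.maximum(np.abs(g_bar) - lam, 0.0)
    return -(np.sqrt(t) / gamma) * np.sign(g_bar) * shrunk

# Toy usage: accumulate gradients and recompute weights each step.
grad_sum = np.zeros(5)
for t in range(1, 4):
    g = np.random.randn(5)  # stand-in for a stochastic gradient
    grad_sum += g
    w = rda_l1_step(grad_sum, t, lam=0.5, gamma=1.0)
```

Note that the weights are recomputed from the dual (gradient) average at every step, unlike SGD with $\ell_1$, where noisy per-step updates keep perturbing weights away from zero.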