Abstract

This work compresses deep neural networks (DNNs) by learning the low-rank and sparsity characteristics of their weight filters. Most compression models exploit either the low-rank or the sparse property of the weights in isolation; as a result, they cannot capture the weight structure accurately and become inefficient. We observe that the low-rank component of the weights also tends to be sparse. Accordingly, we propose an extreme neural network compression method that embeds sparsifying operations after the low-rank decomposition. In addition, a global sparse component is introduced to compensate for the performance loss of the compressed model. Experiments demonstrate the benefits of the proposed algorithm.
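
To make the idea concrete, the following is a minimal sketch of a low-rank-then-sparsify decomposition with a global sparse compensation term. The abstract does not specify the paper's exact formulation, so the choices below (truncated SVD for the low-rank step, magnitude-based pruning for sparsification, and the residual-pruning heuristic) are illustrative assumptions, not the authors' method.

```python
import numpy as np

def lowrank_sparse_compress(W, rank, sparsity):
    """Illustrative sketch: W ~ A @ B + R, where A and B are sparsified
    low-rank factors and R is a global sparse compensation term.
    (Assumed formulation; the paper's exact operators are not shown here.)

    W        : 2-D weight matrix (e.g., conv filters reshaped to 2-D)
    rank     : target rank of the low-rank component
    sparsity : fraction of entries zeroed in each low-rank factor
    """
    # Truncated SVD gives the best rank-r approximation of W.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # left factor, shape (m, rank)
    B = Vt[:rank, :]             # right factor, shape (rank, n)

    # Sparsify by magnitude pruning: zero the smallest-magnitude entries.
    def prune(M, frac):
        thresh = np.quantile(np.abs(M), frac)
        return np.where(np.abs(M) >= thresh, M, 0.0)

    A, B = prune(A, sparsity), prune(B, sparsity)

    # Global sparse component: keep only the largest entries of the
    # residual W - A @ B to compensate the approximation error.
    R = prune(W - A @ B, 0.99)   # very sparse correction term
    return A, B, R

# Usage: compress a random 256x512 layer to rank 32.
W = np.random.randn(256, 512).astype(np.float32)
A, B, R = lowrank_sparse_compress(W, rank=32, sparsity=0.5)
W_hat = A @ B + R
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

Storing A, B, and the nonzeros of R in place of W is what yields the compression: the factor pair costs O((m+n)·rank) entries rather than O(m·n), and pruning shrinks both factors further.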
