Abstract

Convolutional neural networks (CNNs) typically contain a large number of parameters, which makes them impractical in settings where storage space is limited. Low-rank decomposition is an effective approach to network compression. However, current methods fall far short of the theoretically optimal compression performance: because convolution filters must be versatile, commonly trained filter sets exhibit only limited low-rankness. We propose a novel compact design for convolutional layers that uses spatial transformations to achieve a much lower-rank form. The convolution filters in our design are generated in a predefined Tucker product form and then modified by learnable, per-filter spatial transformations. The low-rank (Tucker) part reduces the parameter count, while the transformation part enhances the feature representation capacity. We validate the proposed approach on image classification, compressing parameters while also improving accuracy, with experiments on the MNIST, CIFAR10, CIFAR100, and ImageNet datasets. On ImageNet, our approach outperforms low-rank-based state-of-the-art methods by 2% to 6% in top-1 validation accuracy, and it likewise surpasses a series of low-rank baselines on the other datasets, validating the efficacy of the proposed method. Our code is available at https://github.com/liubc17/low_rank_compact_transformed.
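To make the construction concrete, the following is a minimal PyTorch sketch of the layer described above: a filter bank generated as a Tucker product of a small core tensor and four factor matrices, followed by a learnable affine spatial transform applied to each output filter individually. The module name, the chosen ranks, and the use of an affine warp via grid sampling are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TuckerTransformedConv2d(nn.Module):
    """Sketch: low-rank Tucker-generated filters + learnable
    per-filter spatial (affine) transformations."""

    def __init__(self, in_ch, out_ch, k=3, ranks=(8, 8, 3, 3)):
        super().__init__()
        r_out, r_in, r_h, r_w = ranks
        # Tucker factors: a compact parameterization of the full
        # (out_ch x in_ch x k x k) filter bank.
        self.core = nn.Parameter(0.1 * torch.randn(r_out, r_in, r_h, r_w))
        self.u_out = nn.Parameter(0.1 * torch.randn(out_ch, r_out))
        self.u_in = nn.Parameter(0.1 * torch.randn(in_ch, r_in))
        self.u_h = nn.Parameter(0.1 * torch.randn(k, r_h))
        self.u_w = nn.Parameter(0.1 * torch.randn(k, r_w))
        # One learnable 2x3 affine matrix per output filter,
        # initialized to the identity transform.
        theta = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        self.theta = nn.Parameter(theta.repeat(out_ch, 1, 1))
        self.k = k

    def filters(self):
        # Tucker product: contract the core with the four factor matrices
        # to synthesize the full filter bank (out_ch, in_ch, k, k).
        w = torch.einsum('abcd,oa,ib,hc,wd->oihw',
                         self.core, self.u_out, self.u_in,
                         self.u_h, self.u_w)
        # Individual spatial transform per filter: warp each filter's
        # k x k spatial support with its own affine grid.
        o, i, k, _ = w.shape
        grid = F.affine_grid(self.theta, (o, i, k, k), align_corners=False)
        return F.grid_sample(w, grid, align_corners=False)

    def forward(self, x):
        return F.conv2d(x, self.filters(), padding=self.k // 2)
```

With the ranks above, a 64-to-64 channel 3x3 layer stores the Tucker factors and affine matrices instead of the full 64x64x3x3 tensor, illustrating how the low-rank part caps parameter capacity while the learned warps restore per-filter diversity.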
