Integral neural networks adopt continuous integral operators instead of conventional discrete convolutional operations to perform deep learning tasks. Because this integral operator is the continuous representation of the regular convolutional operation, it is not suitable for representing the separable convolutional operations widely deployed on mobile devices. To address this issue, this paper proposes a separable integral layer, composed of a depth-wise integral operator and a point-wise integral operator, to represent discrete depth-wise and point-wise convolutional operations in a continuous manner. Following the fabric units of five classical convolutional neural networks (NIN, VGG11, GoogLeNet, ResNet18, ResNet50), we design five kinds of separable integral blocks (SIBs) that encapsulate separable integral layers in different manners. Using the proposed SIBs as basic blocks, a family of lightweight separable integral neural networks (SINNs) is constructed and deployed on resource-constrained mobile devices. SINNs retain the characteristics of integral neural networks, i.e., supporting structural pruning without fine-tuning, and also inherit the advantages of separable convolutional operations, i.e., reducing computational cost while maintaining competitive performance. Experimental results show that SINNs achieve performance similar to that of state-of-the-art integral neural networks (INNs) while reducing the computational cost to as little as 1/1.79 of that of INNs (1.74× fewer parameters than the INN with a ResNet101 backbone) on the ImageNet dataset. The code will be released at https://github.com/ljh3832-ccut/SINN.
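For intuition only, below is a minimal, hypothetical sketch of what a separable integral layer could look like: continuous weight functions are sampled on a discrete grid and then applied as a depth-wise convolution followed by a point-wise convolution. The class name, the parameterization of the weight functions as small MLPs over normalized coordinates, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparableIntegralLayer(nn.Module):
    """Hypothetical sketch of a separable integral layer.

    Continuous weight functions (modeled here as small MLPs over
    normalized coordinates, an assumption) are sampled on a grid and
    used as depth-wise and point-wise convolution kernels.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3, hidden=16):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        # Continuous depth-wise weight function w_d(c, x, y) over [0, 1]^3.
        self.depthwise_fn = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # Continuous point-wise weight function w_p(c_out, c_in) over [0, 1]^2.
        self.pointwise_fn = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    @staticmethod
    def _grid(*sizes):
        # Normalized sampling nodes in [0, 1] along each axis.
        axes = [torch.linspace(0.0, 1.0, s) for s in sizes]
        return torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)

    def forward(self, x):
        C, K = self.in_channels, self.kernel_size
        # Sample the depth-wise kernel: one K x K filter per input channel.
        dw = self.depthwise_fn(self._grid(C, K, K).reshape(-1, 3))
        dw = dw.view(C, 1, K, K)
        # Sample the point-wise kernel: a 1 x 1 channel-mixing filter.
        pw = self.pointwise_fn(self._grid(self.out_channels, C).reshape(-1, 2))
        pw = pw.view(self.out_channels, C, 1, 1)
        x = F.conv2d(x, dw, padding=K // 2, groups=C)  # depth-wise step
        return F.conv2d(x, pw)                         # point-wise step

# Usage: because the kernels are sampled from continuous functions rather
# than stored as fixed tensors, they can in principle be re-sampled at a
# different channel resolution (structural pruning without fine-tuning).
layer = SeparableIntegralLayer(in_channels=8, out_channels=16)
out = layer(torch.randn(1, 8, 32, 32))  # -> torch.Size([1, 16, 32, 32])
```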