Abstract
In this paper, we present an approach for minimizing the computational complexity of trained convolutional neural networks (ConvNets). The idea is to approximate all elements of a given ConvNet, replacing the original convolutional filters and parameters (pooling and bias coefficients, and the activation function) with efficient approximations capable of extreme reductions in computational complexity. Low-complexity convolution filters are obtained through a binary (zero-one) linear programming scheme based on the Frobenius norm over sets of dyadic rationals. The resulting matrices allow for multiplication-free computations requiring only addition and bit-shifting operations. Such low-complexity structures pave the way for low-power, efficient hardware designs. We applied our approach to three use cases of different complexities: 1) a "light" but efficient ConvNet for face detection (with around 1000 parameters); 2) another for handwritten digit classification (with more than 180 000 parameters); and 3) a significantly larger ConvNet, AlexNet, with millions of parameters. We evaluated the overall performance on the respective tasks for different levels of approximation. In all considered applications, very low-complexity approximations were derived while maintaining nearly identical classification performance.
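As a rough illustration of the core idea, the sketch below rounds each filter coefficient to the nearest dyadic rational k/2^b and evaluates the convolution using only additions and bit-shifts. This elementwise rounding is a simplified stand-in, not the paper's binary linear programming formulation, and the function names (`dyadic_approx`, `mul_by_shifts`, `approx_conv2d`), the bit-width choice, and the use of NumPy are assumptions made for the example.

```python
import numpy as np

def dyadic_approx(filt, num_bits=4):
    """Round each coefficient of `filt` to the nearest dyadic rational k / 2**num_bits.

    For a fixed denominator, elementwise rounding minimizes the Frobenius norm of the
    approximation error; the paper's binary programming search is more general, so this
    is only an illustrative stand-in.
    """
    numerators = np.rint(filt * (1 << num_bits)).astype(np.int64)
    return numerators  # filt is approximated by numerators / 2**num_bits

def mul_by_shifts(x, k):
    """Scale integer array x by integer k using only shifts, additions, and negation."""
    negative = k < 0
    k = abs(k)
    acc = np.zeros_like(x)
    bit = 0
    while k:
        if k & 1:
            acc = acc + (x << bit)  # add a shifted copy of x for each set bit of k
        k >>= 1
        bit += 1
    return -acc if negative else acc

def approx_conv2d(image, numerators, num_bits=4):
    """'Valid' 2-D convolution (ConvNet-style cross-correlation) with dyadic filter taps.

    Each tap contributes via shifts and adds; a final arithmetic right shift divides
    the accumulated result by the common denominator 2**num_bits.
    """
    kh, kw = numerators.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.int64)
    for r in range(kh):
        for c in range(kw):
            patch = image[r:r + oh, c:c + ow].astype(np.int64)
            out += mul_by_shifts(patch, int(numerators[r, c]))
    return out >> num_bits
```

In this simplified setting the only design choice is the denominator 2**num_bits: larger values give a closer fit to the original filter at the cost of wider accumulators, mirroring the accuracy/complexity trade-off studied in the paper at different levels of approximation.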