Abstract

Convolution is the core operation of modern deep neural networks. It is well known that convolutions can be implemented in the Fourier transform domain. In this article, we propose to use the binary block Walsh–Hadamard transform (WHT) instead of the Fourier transform. We use WHT-based binary layers to replace some of the regular convolution layers in deep neural networks, utilizing both one-dimensional (1D) and two-dimensional (2D) binary WHTs. In both the 1D and 2D layers, we compute the binary WHT of the input feature map and denoise the WHT-domain coefficients using a nonlinearity obtained by combining soft-thresholding with the tanh function; after denoising, we compute the inverse WHT. We use 1D-WHT layers to replace 1 × 1 convolutional layers, and 2D-WHT layers to replace 3 × 3 convolution layers and Squeeze-and-Excite layers. 2D-WHT layers with trainable weights can also be inserted before the Global Average Pooling layers to assist the dense layers. In this way, we reduce the number of trainable parameters significantly with only a slight decrease in accuracy. We implement the WHT layers in MobileNet-V2, MobileNet-V3-Large, and ResNet, reducing the number of parameters significantly with negligible accuracy loss. Moreover, according to our speed test, the 2D-FWHT layer runs about 24 times as fast as a regular 3 × 3 convolution while using 19.51% less RAM in an NVIDIA Jetson Nano experiment.
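
To make the transform–denoise–inverse pipeline concrete, the following is a minimal NumPy sketch of the 1D-WHT layer idea described above. It assumes the transform is taken over the channel axis of an (N, H, W, C) feature map with C a power of two, and that the denoising nonlinearity is tanh applied to soft-thresholded coefficients; the fixed `threshold` value, the exact combination of soft-thresholding and tanh, and the absence of trainable weights are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform along the last axis (length must be a power of 2)."""
    y = np.array(x, dtype=np.float64, copy=True)
    n = y.shape[-1]
    assert n > 0 and n & (n - 1) == 0, "last axis length must be a power of 2"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = y[..., i:i + h].copy()
            b = y[..., i + h:i + 2 * h].copy()
            y[..., i:i + h] = a + b          # butterfly: sums
            y[..., i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return y

def soft_threshold_tanh(x, t):
    """Hypothetical denoiser: soft-threshold the coefficients, then squash with tanh."""
    shrunk = np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    return np.tanh(shrunk)

def wht_1d_layer(feature_map, threshold=0.5):
    """Sketch of a 1D-WHT layer over the channel axis of an (N, H, W, C) feature map."""
    c = feature_map.shape[-1]
    coeffs = fwht(feature_map)                      # forward WHT over channels
    denoised = soft_threshold_tanh(coeffs, threshold)
    return fwht(denoised) / c                       # inverse WHT (WHT is self-inverse up to 1/C)
```

A quick usage check of the sketch: `y = wht_1d_layer(np.random.randn(2, 8, 8, 64))` returns an array with the same (2, 8, 8, 64) shape, analogous to a 1 × 1 convolution that preserves the channel count.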
