Abstract

Spectral representations have been introduced into deep convolutional neural networks (CNNs) mainly to accelerate convolutions and mitigate information loss. However, the repeated domain transformations and complex-valued arithmetic of the commonly used Fourier transforms (DFT, FFT) seriously limit the applicability of spectral networks. In contrast, discrete cosine transform (DCT)-based methods are more promising because they operate entirely on real numbers. Hence, in this work, we investigate the convolution theorem of the DCT and propose a faster spectral convolution method for CNNs. First, we transform the input feature map and the convolutional kernel into the frequency domain via the DCT. We then perform element-wise multiplication between the spectral feature map and kernel, which is mathematically equivalent to symmetric convolution in the spatial domain but much cheaper than straightforward spatial convolution. Since the DCT involves only real arithmetic, the computational complexity of our method is significantly lower than that of traditional FFT-based spectral convolution. In addition, we introduce a network optimization strategy that suppresses repeated domain transformations by leveraging the intrinsically extended kernels. Furthermore, we present a partial symmetry-breaking strategy with spectral dropout to mitigate the performance degradation caused by kernel symmetry. Experimental results demonstrate that, compared with traditional spatial and spectral methods, our proposed DCT-based spectral convolution effectively accelerates networks while achieving comparable accuracy.
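The DCT convolution theorem underlying this approach can be illustrated in one dimension with NumPy/SciPy. This is a minimal sketch, not the paper's implementation: the function names are hypothetical, and it uses the DCT-I form of the theorem, where element-wise multiplication of DCT-I spectra equals circular convolution of the whole-sample symmetric extensions of the signals, computed with only real arithmetic.

```python
import numpy as np
from scipy.fft import dct, idct


def symmetric_convolve_dct(x, y):
    """Symmetric convolution via the DCT-I convolution theorem:
    multiply the DCT-I spectra element-wise, then invert.
    Only real-valued arithmetic is involved."""
    return idct(dct(x, type=1) * dct(y, type=1), type=1)


def symmetric_convolve_direct(x, y):
    """Reference result: explicit circular convolution of the
    whole-sample symmetric extensions [x0..x_{N-1}, x_{N-2}..x1]
    (length 2N-2), truncated back to the first N samples."""
    def extend(a):
        return np.concatenate([a, a[-2:0:-1]])
    xe, ye = extend(x), extend(y)
    M = len(xe)
    z = np.array([sum(xe[m] * ye[(n - m) % M] for m in range(M))
                  for n in range(M)])
    return z[:len(x)]


rng = np.random.default_rng(0)
x = rng.standard_normal(8)
y = rng.standard_normal(8)
print(np.allclose(symmetric_convolve_dct(x, y),
                  symmetric_convolve_direct(x, y)))  # prints True
```

The direct reference costs O(N^2) per output, whereas the spectral path costs one multiplication per coefficient plus fast transforms; in a CNN, the transforms themselves can be amortized across layers, which is what the network optimization strategy above targets.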
