Abstract
Secure neural network (NN) inference, which combines homomorphic encryption (HE) with NN evaluation, has recently attracted much attention. However, the large number of computations, introduced mainly by the HE scheme, forms the bottleneck in real-time applications. In this article, we present a hardware accelerator on a field-programmable gate array (FPGA) for the homomorphic convolution layer (HomConvL), the most computation-intensive part of HE-based secure inference. First, we propose a new HomConvL algorithm called packed rotations at inputs (PaRotI), which is well suited to hardware implementation owing to its inherent high parallelism and low complexity, with acceptable noise growth and moderate resource consumption. Then, we present three highly parallel architectures for different parameter sets and application scenarios of state-of-the-art HomConvL algorithms. The new architectures are implemented on a Xilinx VCU110 FPGA board, and the experimental results demonstrate that our designs achieve 15.31×–19.46× speedups over the software implementations.
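As background for readers unfamiliar with packed HE convolution, the sketch below illustrates, in plain Python, the rotate-and-accumulate pattern that SIMD-packed homomorphic convolution algorithms rely on: a cyclic rotation of the packed slots stands in for an HE slot rotation, and the convolution is expressed as a weighted sum of rotated copies. This is a hypothetical toy, not the paper's PaRotI algorithm; the function names and the 1-D setting are illustrative assumptions.

```python
# Toy plaintext analogue of rotation-based HE convolution.
# NOTE: illustrative sketch only, NOT the PaRotI algorithm itself;
# it shows why slot rotations dominate the cost of HomConvL.

def rotate(vec, k):
    """Cyclic left rotation -- the plaintext analogue of an HE slot rotation."""
    k %= len(vec)
    return vec[k:] + vec[:k]

def conv1d_by_rotations(packed, kernel):
    """1-D valid convolution of a packed vector, expressed as a sum of
    rotated copies of the input scaled by the kernel taps."""
    n, klen = len(packed), len(kernel)
    acc = [0] * n
    for t, w in enumerate(kernel):
        rot = rotate(packed, t)               # one (expensive) HE rotation per tap
        acc = [a + w * r for a, r in zip(acc, rot)]
    return acc[: n - klen + 1]                # only these slots hold valid outputs

print(conv1d_by_rotations([1, 2, 3, 4], [1, 1]))  # -> [3, 5, 7]
```

In an actual HE scheme, each `rotate` call is a key-switched Galois automorphism and dominates the layer's cost, which is why algorithms that reduce or parallelize rotations (as PaRotI does at the inputs) map well to FPGA hardware.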
Published in: IEEE Transactions on Very Large Scale Integration (VLSI) Systems