Abstract

We design a binary convolution operation circuit (BCOC) using single-flux-quantum (SFQ) circuits for high-speed, energy-efficient neural networks. The proposed circuit performs binary convolution operations with a 3 × 3 kernel, accelerating the forward-propagation process of a binary neural network (BNN). We analyze the binary convolution process and propose a bisection method to optimize it. The BCOC is designed with a gate-level pipeline architecture and uses the bisection method to reduce the number of pipeline stages. As a result, the circuit area of the BCOC is reduced by approximately 50% compared with that of a BCOC without the bisection method. We design the BCOC with 3270 Josephson junctions using a 10 kA/cm² Nb process. The measurement results show that the BCOC can perform binary convolution operations with a 3 × 3 kernel. Compared with a CMOS circuit, the BCOC improves power efficiency by a factor of 3.9. In future research, we will build a library of SFQ-based BNN circuits to simulate various BNN structures.
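For readers unfamiliar with the operation being accelerated, the following is a minimal software sketch of a 3 × 3 binary convolution as used in BNN forward propagation: with ±1 activations and weights, the per-element XNOR reduces to multiplication and the popcount/accumulate step is a sum over the nine products. The function name binary_conv3x3, the use of NumPy, and the sample data are illustrative assumptions; this sketch does not describe the paper's SFQ circuit or its bisection-based pipeline.

```python
import numpy as np

def binary_conv3x3(feature_map, kernel):
    """Binary 3x3 convolution over a 2-D map of +1/-1 activations.

    feature_map : 2-D array with entries in {+1, -1}
    kernel      : 3x3 array with entries in {+1, -1}
    Returns the integer pre-activation at each valid position.
    """
    h, w = feature_map.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    for i in range(h - 2):
        for j in range(w - 2):
            window = feature_map[i:i + 3, j:j + 3]
            # XNOR of {+1, -1} values is element-wise multiplication;
            # the accumulate (popcount) step is the sum of the 9 products.
            out[i, j] = int(np.sum(window * kernel))
    return out

# Example: a 4x4 binary feature map and a 3x3 binary kernel
fmap = np.array([[ 1, -1,  1,  1],
                 [-1,  1,  1, -1],
                 [ 1,  1, -1,  1],
                 [ 1, -1,  1,  1]])
kern = np.array([[ 1, -1,  1],
                 [-1,  1, -1],
                 [ 1, -1,  1]])
print(binary_conv3x3(fmap, kern))  # 2x2 map of integer pre-activations
```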
