Abstract

Scaling is one of the complex operations in the Residue Number System (RNS). This operation is necessary in RNS-based implementations of deep neural networks (DNNs) to prevent overflow. However, state-of-the-art RNS scalers for special moduli sets use the 2^k modulus as the scaling factor, which yields a high-precision output at the cost of large area and delay. Low-precision scaling based on multi-moduli scaling factors can therefore be used to improve performance. However, low-precision scaling of numbers smaller than the scaling factor produces a zero output, which makes the results of subsequent operations faulty. This paper first presents the formulation and hardware architecture of low-precision RNS scaling for four-moduli sets using the New Chinese Remainder Theorem II (New CRT-II) with a two-moduli scaling factor. Next, the low-precision scaler circuits are reused to realize a high-precision scaler with minimal overhead. The proposed scaler can thus detect a zero output after low-precision scaling and transform the low-precision scaled residues to high precision, preventing a zero output when the input number is not zero.
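As a behavioral illustration of the detect-and-recover idea described above (a plain-integer sketch, not the New CRT-II hardware architecture), the following Python fragment scales an input by an assumed two-moduli factor 2^n(2^n + 1) and, when the result is zero while the input is not, falls back to an assumed single-modulus factor 2^n; the function name and factor choices are illustrative assumptions rather than the paper's exact design.

```python
# Behavioral sketch only (assumed factors, not the paper's hardware datapath).
def scale_with_zero_detection(x: int, n: int) -> int:
    """Scale x, avoiding a spurious zero result for non-zero inputs."""
    k_lo = (2 ** n) * (2 ** n + 1)   # two-moduli (low-precision) scaling factor
    k_hi = 2 ** n                    # single-modulus (high-precision) factor
    y = x // k_lo                    # low-precision scaling
    if y == 0 and x != 0:            # zero detected although the input is non-zero
        y = x // k_hi                # reuse the datapath with the smaller factor
    return y

print(scale_with_zero_detection(200, 4))   # 200 < 272, so the fallback returns 200 // 16 = 12
print(scale_with_zero_detection(1000, 4))  # 1000 // 272 = 3 from the low-precision path
```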

Highlights

  • Residue Number Systems (RNSs) have been used in different applications such as digital signal processing (DSP) [1] and deep learning systems [2] to provide low-power, high-speed and fault-tolerant computations [3]

  • Scaling is a difficult operation, since division in an RNS cannot be performed in parallel modular channels the way multiplication and addition can [4] (see the sketch after this list)

  • Although this scaling factor can significantly reduce the size of the operands, the limited 3n-bit dynamic range of the three-moduli set {2^n − 1, 2^n, 2^n + 1} is not suitable for two-moduli scaling factors: in this three-moduli RNS, the values of most numbers are less than the scaling factor (i.e., 2^n(2^n + 1)), which results in a zero scaler output and makes the operation faulty

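To make the second highlight concrete, the sketch below (using an arbitrary small, pairwise-coprime moduli set chosen for illustration, not one from the paper) checks that addition and multiplication can be computed channel by channel, whereas naive channel-wise division does not reproduce the residues of the true quotient.

```python
# Illustrative moduli only; any pairwise-coprime set shows the same behavior.
moduli = (7, 8, 9)

def to_rns(x):
    """Residue representation of x with respect to the moduli set."""
    return tuple(x % m for m in moduli)

a, b, k = 100, 45, 4

# Channel-wise addition and multiplication agree with the true results.
assert to_rns(a + b) == tuple((ra + rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), moduli))
assert to_rns(a * b) == tuple((ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), moduli))

# Channel-wise "division" does not: the residues of a // k are not obtained
# by dividing each residue separately.
print(to_rns(a // k))                                           # (4, 1, 7): residues of 25
print(tuple((r // k) % m for r, m in zip(to_rns(a), moduli)))   # (0, 1, 0): wrong
```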

Summary

Introduction

Residue Number Systems (RNSs) have been used in different applications such as digital signal processing (DSP) [1] and deep learning systems [2] to provide low-power, high-speed, and fault-tolerant computations [3]. The authors of [9] proposed two-moduli scaling based on 2^n(2^n + 1) as the scaling factor, which leads to a low-precision output. Although this scaling factor can significantly reduce the size of the operands, the limited 3n-bit dynamic range of the three-moduli set {2^n − 1, 2^n, 2^n + 1} is not suitable for two-moduli scaling factors: in this three-moduli RNS, the values of most numbers are less than the scaling factor (i.e., 2^n(2^n + 1)), which results in a zero scaler output and makes the operation faulty.
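A small numerical example (with assumed values n = 4 and X = 250, and plain integer division standing in for the RNS scaler) shows how a non-zero operand below the scaling factor vanishes after low-precision scaling:

```python
# Illustrative values only: n = 4 gives the moduli set {15, 16, 17}.
n = 4
m1, m2, m3 = 2**n - 1, 2**n, 2**n + 1
K = m2 * m3                              # two-moduli scaling factor 2^n(2^n + 1) = 272

def to_rns(x):
    """Residues of x with respect to the three moduli."""
    return (x % m1, x % m2, x % m3)

x = 250                                  # non-zero operand smaller than K
print(to_rns(x))                         # (10, 10, 12): clearly a non-zero number
print(x // K)                            # 0: low-precision scaling loses the value entirely
```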

Low-Precision Scaling with Two-Moduli Scaling Factor
Scaling Concept and CRT-II
General Formulations
Case Study
Performance Evaluation
Conclusions
