Abstract

With the arrival of the open-source RISC-V processor architecture, there is the chance to rethink Deep Neural Networks (DNNs) and information representation and processing. In this work, we exploit the following ideas: i) reduce the number of bits needed to represent the weights of DNNs, using our recent findings and implementation of the posit number system; ii) exploit RISC-V vectorization as much as possible to speed up format encoding/decoding, the evaluation of activation functions (using only arithmetic and logic operations, exploiting approximated formulas) and the computation of the core DNN matrix-vector operations. The comparison with the well-established ARM Scalable Vector Extension architecture is both natural and challenging, given the closed but mature nature of the ARM ecosystem. The results show that it is possible to vectorize posit operations on RISC-V, gaining a substantial speed-up on all the operations involved. Furthermore, the experimental outcomes highlight how the new architecture can catch up, in terms of performance, with the more mature ARM architecture. Towards this end, the present study is important because it anticipates the results we expect to achieve once we have an open RISC-V hardware co-processor capable of operating natively on posits.
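As a concrete illustration of the "arithmetic and logic operations only" idea mentioned above, the sigmoid of a posit⟨8,0⟩ can be approximated with pure bit manipulations (flip the sign bit, then logical shift right by two), a trick due to Gustafson for es = 0 posits. The sketch below is illustrative only and is not the cppPosit implementation described in the paper; the decoder is included just to check the approximation against real values:

```python
def decode_posit8_es0(p: int) -> float:
    """Decode an 8-bit, es = 0 posit bit pattern into a float (illustrative only)."""
    if p == 0x00:
        return 0.0
    if p == 0x80:
        return float("nan")  # NaR (Not a Real)
    sign = -1.0 if p & 0x80 else 1.0
    if p & 0x80:
        p = (-p) & 0xFF      # negative posits are two's-complemented before decoding
    bits = p & 0x7F
    first = (bits >> 6) & 1  # leading regime bit
    run = 0
    for i in range(6, -1, -1):  # length of the regime run
        if (bits >> i) & 1 == first:
            run += 1
        else:
            break
    k = run - 1 if first else -run   # regime value (useed = 2 when es = 0)
    frac_bits = max(7 - run - 1, 0)  # bits left for the fraction
    frac = bits & ((1 << frac_bits) - 1)
    mantissa = 1.0 + (frac / (1 << frac_bits) if frac_bits else 0.0)
    return sign * (2.0 ** k) * mantissa

def fast_sigmoid(p: int) -> int:
    """Approximate sigmoid on a posit<8,0> bit pattern:
    flip the sign bit, then logical shift right by 2."""
    return ((p ^ 0x80) >> 2) & 0xFF
```

For example, `0x40` encodes the posit 1.0, and `decode_posit8_es0(fast_sigmoid(0x40))` yields 0.75, close to sigmoid(1.0) ≈ 0.731; no transcendental function evaluation is needed.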

Highlights

  • In recent years, RISC-V has started to emerge as an open-source alternative CPU architecture [4, 7, 27]

  • We exploit the following ideas: i) reduce the number of bits needed to represent the weights of Deep Neural Networks (DNNs), using our recent findings and implementation of the posit number system; ii) exploit RISC-V vectorization as much as possible to speed up format encoding/decoding, the evaluation of activation functions and the computation of the core DNN matrix-vector operations

  • We present the implementation of posit vector operations for DNNs on the RISC-V open-source hardware platform

Summary

Introduction

RISC-V has started to emerge as an open-source alternative CPU architecture [4, 7, 27]. Several real-number representations have been proposed by industry and research, such as Flexpoint by Intel [23, 26], BFLOAT16 by Google [8] and the format proposed by Facebook AI [22]. Another very promising alternative to the IEEE 32-bit floating-point standard is the posit™ number system, proposed by Gustafson [19]. Our ultimate goal is to extend RISC-V with a Posit Processing Unit (PPU) acting as a co-processor, by extending the processor instruction set architecture (ISA). While working towards this end, we can already gain great benefits from the posit format. In this work we present the vectorized extension of cppPosit (a software posit library developed and maintained by the authors) for RISC-V, following the same approach as our previous implementation of the ARM SVE vectorized operations [14].
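The vectorization strategy pursued in the paper treats posit bit patterns as packed integers, so that activation functions become lane-wise integer operations. The idea can be mimicked in NumPy, where each array element plays the role of one SIMD lane; the bit-level sigmoid trick for 8-bit, es = 0 posits is used as an example (function names are ours, not the paper's API):

```python
import numpy as np

def fast_sigmoid_lanes(p: np.ndarray) -> np.ndarray:
    """Lane-wise sigmoid approximation on packed posit<8,0> bit patterns:
    each uint8 element is one SIMD lane; the XOR flips the sign bit and the
    logical right shift by 2 yields the bits of the approximated sigmoid."""
    p = p.astype(np.uint8)
    return (p ^ np.uint8(0x80)) >> np.uint8(2)
```

For instance, `fast_sigmoid_lanes(np.array([0x00, 0x40, 0xC0], dtype=np.uint8))` (the posits 0.0, 1.0 and -1.0) returns the bit patterns `0x20`, `0x30`, `0x10`, i.e. the posits 0.5, 0.75 and 0.25. On a vector ISA such as RISC-V "V" or ARM SVE, the same two instructions process an entire hardware vector of posits per iteration.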

Posit arithmetic
Format overview
Advantages over IEEE 32-bit Floats
No exponent bit case
Past achievements concerning posit-based DNNs
The RISC-V architecture
The RISC-V vector extension
The cppPosit library
Posits and RISC-V vectorization
Experimental results
Vectorization of posit encoding and decoding
Vectorized activation function benchmarks
Vectorized matrix-vector operation benchmarks
Analysis of results and discussions
Future work
Conclusions
Compliance with ethical standards