Abstract

The high computational complexity, memory footprint, and energy requirements of machine learning models, such as Artificial Neural Networks (ANNs), hinder their deployment on resource-constrained embedded systems. Most state-of-the-art works have approached this problem by proposing various low bit-width data representation schemes and optimized arithmetic operator implementations. To further elevate the implementation gains offered by these individual techniques, there is a need to cross-examine and combine their unique features. This paper presents ExPAN(N)D, a framework to analyze and combine the efficacy of the Posit number representation scheme and the efficiency of fixed-point arithmetic implementations for ANNs. The Posit scheme offers a better dynamic range and higher precision for various applications than the IEEE 754 single-precision floating-point format. However, due to the dynamic nature of the fields of the Posit scheme, the corresponding arithmetic circuits have a higher critical path delay and larger resource requirements than single-precision-based arithmetic units. To this end, we propose a novel Posit-to-fixed-point converter for enabling high-performance and energy-efficient hardware implementations for ANNs with minimal drop in output accuracy. We also propose a modified Posit-based representation to store the trained parameters of a network. With the proposed Posit-to-fixed-point converter-based designs, we provide multiple design points with varying accuracy-performance trade-offs for an ANN. For instance, compared to the lowest-power Posit-only accelerator design, one of our proposed designs reduces power dissipation by 80% and LUT utilization by 48%, with a marginal increase in classification error for ImageNet classification using VGG-16.
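As a rough software analogue of the Posit-to-fixed-point conversion described above, the sketch below quantizes an already-decoded parameter value into a signed fixed-point code with saturation. The function names and the 16-bit/12-fractional-bit format are illustrative assumptions for this sketch, not the converter parameters used in the paper.

```python
# Illustrative sketch only: quantize an already-decoded Posit value into a
# signed fixed-point code, as a software analogue of a Posit-to-fixed-point
# converter. Bit-widths and names are assumptions, not the paper's design.

def to_fixed_point(value: float, total_bits: int = 16, frac_bits: int = 12) -> int:
    """Round `value` to the nearest `total_bits`-bit signed fixed-point code
    with `frac_bits` fractional bits, saturating instead of wrapping."""
    code = round(value * (1 << frac_bits))
    max_code = (1 << (total_bits - 1)) - 1   # e.g.  32767 for 16 bits
    min_code = -(1 << (total_bits - 1))      # e.g. -32768 for 16 bits
    return max(min_code, min(max_code, code))

def from_fixed_point(code: int, frac_bits: int = 12) -> float:
    """Recover the real value represented by a fixed-point code."""
    return code / (1 << frac_bits)

# Example: a trained weight value after Posit decoding.
w = 0.7384
code = to_fixed_point(w)                 # integer fed to fixed-point MAC units
print(code, from_fixed_point(code))      # 3024 0.73828125
```

Storing parameters compactly in Posit form while performing the multiply-accumulate operations on such fixed-point codes is the kind of trade-off the proposed converter is meant to enable.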

Highlights

  • Machine learning algorithms have become an essential factor in various modern applications, such as scene perception and image classification [1]–[3]

  • Many recent works have addressed this problem by proposing various optimization techniques to reduce the complexity of machine learning models, such as Artificial Neural Networks (ANNs)

  • We propose the ExPAN(N)D framework for Exploring the joint use of Posit and Fixed Point (FxP) representations for Designing efficient ANNs


Introduction

Machine learning algorithms have become an essential factor in various modern applications, such as scene perception and image classification [1]–[3]. Massively parallel architectures, such as Graphics Processing Units (GPUs), and cloud-based computing have traditionally been used to train these algorithms. To deploy the resulting trained models on resource-constrained embedded systems, their computational complexity and storage requirements must be reduced.

One promising direction is the Posit number format, an alternative to IEEE 754 floating point. Its run-length encoded regime field gives the Posit number scheme a wider dynamic range. With an appropriate configuration of exponent size and total bit-width, a Posit number can be configured to behave like an IEEE 754-2008 compliant floating-point number. Posit arithmetic supports only one rounding mode: round to nearest, ties to even.
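To make these fields concrete, the following sketch decodes a small Posit bit pattern into its sign, regime, exponent, and fraction fields and reconstructs its value. The configuration (n = 8, es = 1) and the variable names are illustrative choices for this sketch, not the bit-widths evaluated in the paper.

```python
# Illustrative sketch only: decode an n-bit Posit with `es` exponent bits into
# its sign, regime, exponent, and fraction fields. The field-extraction rules
# follow the Posit format described above; names and widths are our choices.

def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                           # Not-a-Real (NaR)
    sign = (bits >> (n - 1)) & 1
    if sign:                                          # negative: decode the two's complement
        bits = (-bits) & ((1 << n) - 1)
    body = bits & ((1 << (n - 1)) - 1)                # everything after the sign bit

    # Regime: a run of identical bits; its length sets the coarse scale useed**r,
    # where useed = 2**(2**es). Longer runs mean wider range but fewer fraction bits.
    regime_bit = (body >> (n - 2)) & 1
    run, pos = 0, n - 2
    while pos >= 0 and ((body >> pos) & 1) == regime_bit:
        run, pos = run + 1, pos - 1
    regime = run - 1 if regime_bit else -run
    pos -= 1                                          # skip the terminating bit

    # Exponent: up to `es` bits following the regime (zero-padded if truncated).
    exp_bits = max(0, min(es, pos + 1))
    exponent = ((body >> (pos + 1 - exp_bits)) & ((1 << exp_bits) - 1)) if exp_bits else 0
    exponent <<= es - exp_bits
    pos -= exp_bits

    # Fraction: the remaining bits, with an implicit leading 1.
    frac_bits = pos + 1
    mantissa = 1.0 + (body & ((1 << frac_bits) - 1)) / (1 << frac_bits) if frac_bits > 0 else 1.0

    value = (2.0 ** (2 ** es)) ** regime * 2.0 ** exponent * mantissa
    return -value if sign else value

print(decode_posit(0b01000000))   # 1.0
print(decode_posit(0b01110000))   # 16.0 (regime run of three 1s, useed = 4)
```

Unlike this variable-length decode, a fixed-point code has fields of constant width, which is one reason the corresponding arithmetic hardware is cheaper, as the abstract's converter-based designs exploit.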
