Abstract

Energy efficiency continues to be the core design challenge for artificial intelligence (AI) hardware designers. In this paper, we propose a new AI hardware architecture targeting Internet of Things applications. The architecture is founded on the principle of learning automata, defined using propositional logic. The logic-based underpinning enables low-energy footprints as well as high learning accuracy during training and inference, which are crucial requirements for efficient AI with long operating life. We present the first insights into this new architecture in the form of a custom-designed integrated circuit for pervasive applications. Fundamental to this circuit is the systematic encoding of binarized input data fed into maximally parallel logic blocks. The allocation of these blocks is optimized through a design exploration and automation flow using field programmable gate array-based fast prototypes and software simulations. The design flow allows for an expedited hyperparameter search to meet the conflicting requirements of energy frugality and high accuracy. Extensive validations on the hardware implementation of the new architecture using single- and multi-class machine learning datasets show potential for significantly lower energy than existing AI hardware architectures. In addition, we demonstrate test accuracy and robustness matching the software implementation, outperforming other state-of-the-art machine learning algorithms. This article is part of the theme issue ‘Advanced electromagnetic non-destructive evaluation and smart monitoring’.
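For illustration, the sketch below shows one common way of binarizing input data for a logic-based learner: thermometer-style threshold encoding, in which each raw feature is compared against a set of levels to produce boolean inputs for the parallel logic blocks. The function name, threshold values and NumPy usage are assumptions for illustration only, not the encoder used in the paper.

    import numpy as np

    def booleanize(features, thresholds_per_feature):
        # Thermometer-style encoding (assumed): each feature contributes one bit
        # per threshold, set to 1 when the raw value exceeds that threshold.
        bits = []
        for value, thresholds in zip(features, thresholds_per_feature):
            bits.extend(int(value > t) for t in thresholds)
        return np.array(bits, dtype=np.uint8)

    # Example: two features, three assumed thresholds each -> six boolean inputs.
    x = booleanize([0.7, 0.2], [[0.25, 0.5, 0.75], [0.25, 0.5, 0.75]])
    print(x)  # [1 1 0 0 0 0]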

Highlights

  • Advances in sensing devices have enabled a shift towards the fourth industrial revolution [1].

  • This will ensure that the parameters can be transferred exactly to the application-specific integrated circuit (ASIC) design, and it will become especially important once hardware-centric Tsetlin machine algorithms, which depart further from the software implementation, are developed.

  • The method leverages the natural ability of an ensemble of Tsetlin automata to learn from a set of training data.

Introduction

Advances in sensing devices have enabled a shift towards the fourth industrial revolution [1]. The modular electronic neurons in neural networks (NNs) require arithmetic-heavy circuits, such as multiply–accumulate (MAC) units, and the number of these units can quickly grow with more inputs and with the added complexity of the learning problem [7]. Given this scale of arithmetic complexity, achieving the required energy efficiency and performance in NNs can be daunting, a challenge exacerbated further by the large volume of data generated by IoT devices [8]. The logic-based structure of Tsetlin machines provides opportunities for energy-efficient AI hardware design. This will require addressing the major challenges of systematic architecture allocation of low-level resources, as well as parametric tuning and data binarization, which cannot be achieved using high-level synthesis or hardware-assisted acceleration tools.
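To make the contrast concrete, the sketch below compares the arithmetic of a single MAC-based neuron with the evaluation of one Tsetlin machine clause, which reduces to an AND over the boolean literals selected for inclusion. The function names and the include/exclude mask are hypothetical and only illustrate the logic-based structure, not the circuit presented in the paper.

    import numpy as np

    def mac_neuron(inputs, weights, bias):
        # One multiply and one add per input: arithmetic hardware scales with fan-in.
        return np.dot(inputs, weights) + bias

    def tsetlin_clause(literals, include_mask):
        # A clause is the conjunction (AND) of the literals its automata include;
        # excluded literals are ignored, so no multipliers are needed.
        included = literals[include_mask]
        return int(included.all()) if included.size else 1

    x_bits = np.array([1, 0, 1, 1], dtype=np.uint8)      # booleanized inputs
    literals = np.concatenate([x_bits, 1 - x_bits])      # originals and negations
    include = np.array([True, True, False, False,        # hypothetical include/exclude
                        False, False, False, False])     # decisions by the automata
    print(mac_neuron(np.array([1.0, 0.0, 1.0, 1.0]), np.array([0.4, -0.2, 0.1, 0.3]), 0.05))
    print(tsetlin_clause(literals, include))             # 0: x1 AND x2 fails because x2 = 0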

Machine learning using learning automata
Proposed hardware architecture
Performance and energy efficiency
Training time at 33.3 MHz
Machine learning experiments
Findings
Conclusion