Abstract

Brain-inspired architectures are gaining increased attention, especially for edge devices that must perform cognitive tasks within a limited energy budget and with limited computing resources. The hyperdimensional computing (HDC) paradigm is an emerging framework inspired by an abstract representation of the attributes of neuronal circuits in the human brain. These include a fully holographic random representation, high-dimensional vectors representing data, and robustness to uncertainty. The basic HDC pipeline consists of encoding, training, and comparison stages. The encoding algorithm maps different representations of inputs into a single class and stores them in the associative memory (AM) throughout the training stage. Later, during the inference stage, the similarity is computed between the query vector, which is encoded using the same encoding model, and the classes stored in the AM. HDC has shown promising results for 1D applications, using less power and achieving lower latency than state-of-the-art digital neural networks (DNNs). In 2D applications, however, convolutional neural networks (CNNs) still achieve higher classification accuracy at the expense of more computation. In this paper, a comprehensive study of the HDC paradigm, its main algorithms, and their implementation is presented. Moreover, the main state-of-the-art HDC architectures for 1D and 2D applications are highlighted. The article also analyzes two competing paradigms, namely HDC and CNN, in terms of accuracy, complexity, and number of operations. The paper concludes by highlighting challenges and recommendations for future directions of the HDC framework.
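To make the pipeline concrete, the sketch below walks through the three stages described above: encoding, training into the AM, and similarity comparison at inference. It is a minimal illustration, not the paper's implementation; the bipolar {-1, +1} representation, the record-based encoding, the feature/level counts, and the use of cosine similarity are all assumptions chosen for brevity.

```python
# Minimal sketch of an HDC classification pipeline (illustrative only).
# Assumptions not taken from the paper: bipolar hypervectors, a simple
# record-based encoding, and cosine similarity in the comparison stage.
import numpy as np

D = 10_000                       # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    """I.i.d. random bipolar hypervector (holographic representation)."""
    return rng.choice([-1, 1], size=D)

# Item memory: one random hypervector per feature position and per
# quantization level (8 features and 16 levels are arbitrary choices).
n_features, n_levels = 8, 16
position_hvs = [random_hv() for _ in range(n_features)]
level_hvs = [random_hv() for _ in range(n_levels)]

def encode(sample):
    """Encoding stage: bind each feature's level HV to its position HV,
    then bundle (elementwise sum) and binarize back to bipolar."""
    bound = [position_hvs[i] * level_hvs[q] for i, q in enumerate(sample)]
    return np.sign(np.sum(bound, axis=0) + 0.5).astype(int)

def train(samples, labels, n_classes):
    """Training stage: bundle encoded samples of each class into the AM."""
    am = np.zeros((n_classes, D))
    for x, y in zip(samples, labels):
        am[y] += encode(x)
    return np.sign(am + 0.5).astype(int)

def classify(query, am):
    """Inference stage: cosine similarity between the encoded query and
    each class hypervector stored in the AM."""
    q = encode(query)
    sims = am @ q / (np.linalg.norm(am, axis=1) * np.linalg.norm(q))
    return int(np.argmax(sims))

# Illustrative usage with random quantized samples:
X = rng.integers(0, n_levels, size=(20, n_features))
y = rng.integers(0, 2, size=20)
am = train(X, y, n_classes=2)
pred = classify(X[0], am)
```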

Highlights

  • Advancements in deep learning (DL) algorithms have outperformed conventional machine learning (ML) approaches in many applications: image classification, voice recognition [1], activity recognition [2], and object tracking [3]

  • Hyperdimensional (HD) vector properties and basic operations: the roots of hyperdimensional computing (HDC), used to represent human memory, perception, and cognitive abilities, are presented through the mathematical properties of the HD vector space, which is built on a rich linear algebra (see the sketch after this list)

  • Efficient hardware architectures with low computational complexity and memory requirements, enabling low power consumption and a small form factor, are of paramount importance
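
As a companion to the second highlight, the following sketch illustrates the three basic operations usually defined over the HD vector space: binding, bundling, and permutation. The bipolar {-1, +1} representation and the similarity thresholds are illustrative assumptions, not prescriptions from the paper.

```python
# Sketch of the three basic HD operations, using bipolar hypervectors
# (an illustrative choice, not mandated by the paper).
import numpy as np

D = 10_000
rng = np.random.default_rng(1)
a, b, c = (rng.choice([-1, 1], size=D) for _ in range(3))

def similarity(x, y):
    """Normalized dot product; approximately 0 for unrelated random HVs."""
    return float(x @ y) / D

# Binding (elementwise multiply): the result is dissimilar to its inputs
# and invertible, since x * x = 1 elementwise.
bound = a * b
assert abs(similarity(bound, a)) < 0.05
assert np.array_equal(bound * b, a)   # unbinding recovers a

# Bundling (elementwise majority): the result stays similar to its inputs.
bundle = np.sign(a + b + c)           # ties cannot occur with 3 inputs
assert similarity(bundle, a) > 0.3

# Permutation (cyclic shift): encodes sequence order; the shifted HV is
# dissimilar to the original, and the operation is reversible.
rho_a = np.roll(a, 1)
assert abs(similarity(rho_a, a)) < 0.05
assert np.array_equal(np.roll(rho_a, -1), a)
```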


Summary

INTRODUCTION

Advancements in deep learning (DL) algorithms have outperformed conventional machine learning (ML) approaches in many applications: image classification, voice recognition [1], activity recognition [2], and object tracking [3]. Moving data to and from the cloud is costly in terms of power and resources. To this end, many critical-domain applications such as health care and autonomous vehicles find the intensive ML algorithms impractical for real-time edge devices [6], [7]. It is therefore crucial to design efficient algorithms to perform cognitive tasks, together with specialized hardware that provides high efficiency for edge devices. This article focuses on reviewing state-of-the-art (SOTA) designs exploiting the HDC paradigm for classification tasks. It highlights the main operations of HDC and compares the paradigm with the SOTA computing approach for classification tasks, the CNN.

BACKGROUND
HD PIPELINE FOR CLASSIFICATION TASKS
HD ENCODING STAGE AND ARCHITECTURE
COMMON ROOTS FOR HD ENCODING ALGORITHMS
GENERAL HD PROCESSOR
BENCHMARK APPLICATIONS
CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE DIRECTIONS