Abstract

Vector Symbolic Architectures (VSA) can be used to encode complex objects, such as services and sensors, as hypervectors. Such hypervectors can be used to perform efficient distributed service discovery and workflow orchestration in the communications-constrained environments typical of the Internet of Things (IoT). In these environments, energy efficiency is of great importance. However, most hypervector representations use dense i.i.d. element values, and performing energy-efficient hyperdimensional computing operations on such dense vectors is challenging. More recently, a sparse binary VSA scheme has been proposed based on a slot encoding with $M$ slots of $B$ bit positions each, in which only one bit per slot can be set. This paper shows for the first time that such sparsely encoded hypervectors can be mapped into energy-efficient time-to-spike Spiking Neural Network (SNN) circuits such that all the required VSA operations can be performed. Example VSA SNN circuits have been implemented in the Brian 2 SNN simulator, showing that all VSA binding, bundling, unbinding, and clean-up memory operations execute correctly. Based on these circuit implementations, the energy and processing time required to perform the different VSA operations on typical SNN neuromorphic devices are estimated. Recommendations are also made for the design of future SNN neuromorphic processor hardware that can perform VSA processing more efficiently.
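
The slot encoding referenced above is, in essence, a sparse block code: each hypervector consists of $M$ slots, each holding exactly one active bit out of $B$ positions, so a vector can be stored compactly as $M$ slot indices. The following minimal sketch (in Python/NumPy, with illustrative values of $M$ and $B$; the modular-addition binding shown is the usual choice for slot codes and is an assumption here, not necessarily the paper's exact formulation) illustrates the four VSA operations named in the abstract: binding, bundling, unbinding, and clean-up memory.

```python
import numpy as np

M, B = 1000, 64                  # assumed example sizes: M slots, B bit positions per slot
rng = np.random.default_rng(0)

def random_hv():
    """A sparse slot-encoded hypervector: one active bit per slot,
    stored compactly as the index of that bit within each slot."""
    return rng.integers(0, B, size=M)

def bind(a, b):
    """Binding: slot-wise modular addition of the active-bit indices."""
    return (a + b) % B

def unbind(c, b):
    """Unbinding: the inverse of bind, slot-wise modular subtraction."""
    return (c - b) % B

def bundle(hvs):
    """Bundling: per slot, keep the most frequent active index
    (a majority vote across the input vectors)."""
    stacked = np.stack(hvs)                          # shape (n_vectors, M)
    return np.array([np.bincount(stacked[:, m], minlength=B).argmax()
                     for m in range(M)])

def similarity(a, b):
    """Similarity: the fraction of slots whose active bits coincide."""
    return float(np.mean(a == b))

def cleanup(query, codebook):
    """Clean-up memory: return the stored vector most similar to the query."""
    sims = [similarity(query, hv) for hv in codebook]
    return codebook[int(np.argmax(sims))]

# Usage: bind two symbols, then recover one of them by unbinding and clean-up.
x, y = random_hv(), random_hv()
xy = bind(x, y)
recovered = cleanup(unbind(xy, y), [x, y])
assert np.array_equal(recovered, x)
```

The time-to-spike mapping can be sketched in the same spirit: because each slot carries a single index in $[0, B)$, that index can be encoded as the latency of a single spike. A minimal Brian 2 fragment (assuming one neuron per slot and 1 ms per bit position; the paper's actual circuits may differ) is shown below.

```python
import numpy as np
from brian2 import SpikeGeneratorGroup, SpikeMonitor, run, ms

M, B = 8, 16                                   # small sizes for readability
hv = np.random.default_rng(1).integers(0, B, size=M)

# One neuron per slot; the active bit position becomes the spike latency.
gen = SpikeGeneratorGroup(M, np.arange(M), hv * ms)   # assumed 1 ms per bit position
mon = SpikeMonitor(gen)
run((B + 1) * ms)
print(mon.i[:], mon.t[:])   # each slot fires exactly once, at a time encoding its index
```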
