Abstract

Hyperdimensional computing (HDC) has emerged as a lightweight learning alternative to deep neural networks. A key characteristic of HDC is its high degree of parallelism, which lends itself to hardware acceleration. However, prior hardware implementations of HDC have seldom targeted GPUs, and those that have are inefficient, partly because of the complexity of accelerating HDC on GPUs. In this paper, we present OpenHD, a flexible, high-performance GPU-powered framework that automates the mapping of general HDC applications, including classification and clustering, onto GPUs. OpenHD employs memory optimization strategies specialized for HDC, minimizing access time across the different memory subsystems and removing redundant operations. We also propose a novel training method that enables data parallelism in HDC training. Our evaluation shows that the proposed training rapidly reaches the target accuracy, reducing the required training epochs by 4×. With OpenHD, users can deploy GPU-accelerated HDC applications without domain-expert knowledge. Compared to the state-of-the-art GPU-powered HDC implementation, our evaluation on NVIDIA Jetson TX2 shows that OpenHD is up to 10.5× and 314× faster for HDC-based classification and clustering, respectively. Compared with non-HDC classification and clustering on GPUs, OpenHD-based HDC is 11.7× and 53× faster at comparable accuracy. OpenHD is available at: https://github.com/UCSD-SEELab/openhd
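To make the accelerated workload concrete, the following is a minimal NumPy sketch of the HDC classification flow (encode, bundle class prototypes, classify by similarity) that a framework like OpenHD parallelizes on the GPU. The random-projection encoder, the dimensionality D, and all function names are illustrative assumptions for this sketch, not OpenHD's actual API.

```python
# Illustrative HDC classification sketch (NOT the OpenHD API).
import numpy as np

D = 10000                     # hypervector dimensionality (typical HDC choice)
rng = np.random.default_rng(0)

def encode(X, proj):
    """Encode feature vectors into bipolar hypervectors via random projection (assumed encoder)."""
    return np.sign(X @ proj)                      # shape: (n_samples, D)

def train(X, y, n_classes, proj):
    """Bundle (sum) the hypervectors of each class into a class prototype."""
    H = encode(X, proj)
    model = np.zeros((n_classes, D))
    for c in range(n_classes):
        model[c] = H[y == c].sum(axis=0)
    return model

def predict(X, model, proj):
    """Classify by cosine similarity to the class prototypes."""
    H = encode(X, proj)
    sims = H @ model.T
    sims /= (np.linalg.norm(H, axis=1, keepdims=True) *
             np.linalg.norm(model, axis=1) + 1e-9)
    return sims.argmax(axis=1)

# Toy usage with random data (two classes, 20 features).
proj = rng.standard_normal((20, D))
X, y = rng.standard_normal((100, 20)), rng.integers(0, 2, 100)
model = train(X, y, n_classes=2, proj=proj)
print(predict(X[:5], model, proj))
```

Every step above is element-wise or a dense matrix product over very wide hypervectors, which is why the workload is a natural fit for GPU data parallelism once memory accesses are organized appropriately.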
