Abstract

Today's systems rely on sending all the data to the cloud and then using complex algorithms, such as Deep Neural Networks, which require billions of parameters and many hours to train a model. In contrast, the human brain can do much of this learning effortlessly. Hyperdimensional (HD) Computing aims to mimic the behavior of the human brain by utilizing high-dimensional representations. This leads to various desirable properties that other Machine Learning (ML) algorithms lack, such as robustness to noise in the system and simple, highly parallel operations. In this article, we propose HyDREA, a HyperDimensional Computing system that is Robust, Efficient, and Accurate. We propose a Processing-in-Memory (PIM) architecture that works in a federated learning environment with challenging communication scenarios that cause errors in the transmitted data. HyDREA adaptively changes the bitwidth of the model based on the signal-to-noise ratio (SNR) of the incoming sample to maintain the accuracy of the HD model while achieving significant speedup and energy efficiency. Our PIM architecture achieves a 28× speedup and 255× better energy efficiency than the baseline PIM architecture for Classification, and a 32× speedup and 289× higher energy efficiency for Clustering. HyDREA achieves this by relaxing hardware parameters to gain energy efficiency and speedup while introducing computational errors. We show experimentally that HD Computing can handle these errors without a significant drop in accuracy due to its unique robustness property. For wireless noise, we found that HyDREA is 48× more robust to noise than other comparable ML algorithms. Our results indicate that the proposed system loses less than 1% Classification accuracy, even in scenarios with an SNR of 6.64 dB.
We additionally tested the robustness of using HD Computing for Clustering applications and found that our proposed system also loses less than 1% in the mutual information score, even in scenarios with an SNR under 7 dB, which is 57× more robust to noise than K-means.
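The robustness property claimed above can be illustrated with a minimal sketch (this is not the paper's implementation; the dimensionality, flip rate, and two-class setup are illustrative assumptions). In HD Computing, classes are represented by high-dimensional bipolar hypervectors, and random pairs of such vectors are nearly orthogonal, so a query remains closest to its true class prototype even after a large fraction of its components are corrupted, e.g. by a noisy channel or relaxed hardware:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative)

# Two random bipolar class prototypes; in high dimensions they are
# nearly orthogonal, so dot-product classification is well separated.
proto_a = rng.choice([-1, 1], size=D)
proto_b = rng.choice([-1, 1], size=D)

def classify(query):
    """Return the label of the closer prototype by dot-product similarity."""
    return "A" if query @ proto_a > query @ proto_b else "B"

# Corrupt a query drawn from class A by flipping 30% of its components,
# emulating bit errors introduced by noise.
flip = rng.random(D) < 0.30
noisy_query = np.where(flip, -proto_a, proto_a)

print(classify(noisy_query))
```

With a 30% flip rate the expected similarity to the true prototype is still about 0.4·D, while the similarity to the other prototype stays near zero, so the corrupted query is still classified as "A".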
