Abstract
Hyperdimensional (HD) computing is a brain-inspired learning method widely employed in resource-constrained applications such as the Internet of Things (IoT) owing to its lightweight computation. Although HD computing is efficient in IoT applications, it suffers from a high computational cost due to the large vector size. Thus, several studies have proposed methods to speed up HD computing and increase its efficiency. In this work, we propose a method for processing HD computing called PartialHD. Our method divides a long hypervector into multiple partial vectors and processes each partial vector separately. In the retraining phase, this method improves accuracy by up to 1.93%. Moreover, by employing our proposed method in retraining and inference, only a few partial vectors participate, which reduces the computational overhead in these phases. The evaluation shows that our method processes, on average, 22.2% and 36.23% of the entire hypervectors with negligible accuracy loss in inference and retraining, respectively. Furthermore, we propose two general architectures (light-weight and high-speed), which accelerate partial-vector computations on different FPGA platforms. The results show that the light-weight architecture can accelerate HD computing on resource-constrained FPGAs such as Artix, while the high-speed architecture achieves 4.72× higher throughput than the light-weight architecture.
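The core idea of partitioning a hypervector and computing similarity over only a subset of the partitions can be illustrated with a minimal sketch. This is not the paper's implementation; the split count, the ±1 hypervector encoding, and the dot-product similarity metric are illustrative assumptions.

```python
import numpy as np

def split_hypervector(hv, num_parts):
    """Split a long hypervector into equal-length partial vectors (illustrative)."""
    return np.array_split(hv, num_parts)

def partial_similarity(query_parts, class_parts, active):
    """Dot-product similarity accumulated over only the 'active' partial vectors."""
    return sum(float(np.dot(query_parts[i], class_parts[i])) for i in active)

# Hypothetical setup: bipolar (+1/-1) hypervectors of dimension 10,000
rng = np.random.default_rng(0)
D = 10_000
query = rng.choice([-1, 1], size=D)
class_hv = rng.choice([-1, 1], size=D)

num_parts = 10
q_parts = split_hypervector(query, num_parts)
c_parts = split_hypervector(class_hv, num_parts)

# Using all partial vectors reproduces the full-vector similarity exactly;
# using a small subset (here 2 of 10) approximates it at a fraction of the cost.
full_sim = partial_similarity(q_parts, c_parts, range(num_parts))
approx_sim = partial_similarity(q_parts, c_parts, [0, 1])
```

Because the dot product decomposes additively across partitions, summing all partial similarities recovers the full-vector similarity, while stopping after a few partitions trades a small approximation error for proportionally less computation.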