Abstract
Hyperdimensional (HD) computing is a mathematical framework, inspired by neuroscience, which can be used to represent many machine learning (ML) problems. Data is first encoded into a high-dimensional space (on the order of 10^3 or 10^4 dimensions) to create hypervectors. HD computing combines these hypervectors to create a model used for inference. However, due to the high dimensionality of the hypervectors, inference in HD is very expensive, especially when it runs on embedded devices with limited resources. One naive approach to improve the efficiency of HD computing is to simply lower the dimensionality of the hypervectors, which comes with a corresponding loss in accuracy. However, if the data is compressed intelligently, we can reduce the dimensionality of an HD model without sacrificing accuracy. To that end, we propose CompHD, a novel approach for compressing HD models while maintaining the accuracy of the original model. CompHD exploits the mathematics of high-dimensional spaces to compress hypervectors into shorter vectors while preserving the information of the full-length hypervectors. We evaluated the efficiency of CompHD on a variety of applications. Our results show that CompHD can reduce model size by an average of 69.7%, yielding an execution-time speedup of 4.1× and improving energy efficiency by 74% while maintaining the accuracy of the original model. This enables more low-power IoT devices to use HD computing for ML problems.
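The abstract does not spell out CompHD's compression scheme, but the core idea it describes, folding long hypervectors into shorter ones while preserving their pairwise similarity, can be illustrated with a minimal sketch. The scheme below (splitting a bipolar hypervector into segments, multiplying each segment by a fixed random key, and summing element-wise) is a hypothetical construction for illustration only; the segment count `s` and the key vectors are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

D, s = 10_000, 5          # original dimensionality and compression factor
d = D // s                # compressed dimensionality

# Fixed random bipolar "position keys", one per segment (hypothetical
# construction; the abstract does not specify CompHD's exact scheme).
keys = rng.choice([-1, 1], size=(s, d))

def compress(hv):
    """Fold a length-D bipolar hypervector into length d = D / s by
    key-multiplying each segment and summing element-wise."""
    segments = hv.reshape(s, d)
    return np.sign((segments * keys).sum(axis=0))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Similar hypervectors stay far more similar than unrelated ones
# even after a 5x reduction in dimensionality.
x = rng.choice([-1, 1], size=D)
noisy = x * rng.choice([1, 1, 1, -1], size=D)   # ~25% of elements flipped
unrelated = rng.choice([-1, 1], size=D)

print(cosine(compress(x), compress(noisy)))      # stays well above zero
print(cosine(compress(x), compress(unrelated)))  # near zero
```

The point of the sketch is that random-key folding roughly preserves relative similarities, which is what lets an HD model keep its accuracy after compression: classification in HD computing depends on comparing similarity scores, not on the absolute vector length.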