Abstract

This work comprehensively analyzes the error robustness of hyperdimensional computing (HDC) implemented with FeFET-based computation-in-memory that performs local multiply and global accumulate operations. HDC trains and infers with hypervectors (HVs). Symmetric and asymmetric errors, which model FeFET read-disturb and data-retention errors, are injected into the item memory and/or the associative memory before, after, or during training in various scenarios for a European language classification task. The detailed error injection reveals that HDC tolerates both symmetric and asymmetric error rates of up to 10⁻¹. Building on this analysis of error robustness, training window slide (TWS) improves robustness against memory errors by removing data that contain differing amounts of errors, yielding 10 times higher error tolerance. In addition, parallelizing HV encoding during training accelerates training with up to 10,000-way parallelism while maintaining inference accuracy.
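The symmetric and asymmetric error injection described above can be sketched as independent bit flips applied to binary hypervectors, with separate 0→1 and 1→0 flip probabilities modeling read-disturb and data-retention errors. This is an illustrative sketch, not the paper's simulator; the function name `inject_errors` and the use of NumPy binary vectors are assumptions for illustration.

```python
import numpy as np

D = 10_000  # hypervector dimensionality, a typical choice in HDC

def inject_errors(hv, p01, p10, rng):
    """Flip bits of a binary hypervector with asymmetric rates:
    p01 = probability that a stored 0 is read as 1,
    p10 = probability that a stored 1 is read as 0.
    Symmetric injection is the special case p01 == p10."""
    r = rng.random(hv.shape)
    flipped = hv.copy()
    flipped[(hv == 0) & (r < p01)] = 1
    flipped[(hv == 1) & (r < p10)] = 0
    return flipped

rng = np.random.default_rng(0)
hv = rng.integers(0, 2, size=D, dtype=np.int8)

# Symmetric injection at rate 10^-1: the observed flip fraction
# should be close to the injected rate for large D.
noisy = inject_errors(hv, p01=0.1, p10=0.1, rng=rng)
print(float(np.mean(hv != noisy)))
```

With asymmetric rates (e.g. `p01=0.1, p10=0.0`), only stored zeros are corrupted, mimicking a one-sided retention failure mode.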

