Abstract
Specialized hardware accelerators that go beyond the von Neumann architecture by processing data where it resides, without moving it, are becoming inevitable in data-centric computing. Emerging non-volatile memories, such as the Ferroelectric Field-Effect Transistor (FeFET), enable compact Logic-in-Memory (LiM). In this work, we investigate the probability of error (Perror) in FeFET-based XNOR LiM and demonstrate a new trade-off between speed and reliability. Using our reliability model, we show how Binarized Neural Networks (BNNs) can be proactively trained in the presence of XNOR-induced errors to obtain robust BNNs at design time. Furthermore, leveraging the trade-off between Perror and speed, we present a run-time adaptation technique that selectively trades off Perror against XNOR speed for every BNN layer. Our results demonstrate that when a small loss in inference accuracy (e.g., 1%) is acceptable, our design-time and run-time techniques provide error-resilient BNNs that achieve XNOR speedups of 75% and 50% (FashionMNIST) and 38% and 24% (CIFAR10), respectively.
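To make the design-time idea concrete, the sketch below illustrates one possible way to inject XNOR bit-flip errors (with probability `p_error`) into a binarized layer during training, so the BNN learns to tolerate them. This is not the authors' implementation; the class `NoisyXnorLinear`, the parameter `p_error`, and the straight-through binarization are illustrative assumptions, and the actual Perror-versus-speed relation in the paper comes from the FeFET reliability model rather than a fixed constant.

```python
# Hedged sketch (PyTorch): error-injected training of a binary (XNOR-based) layer.
# Assumption: each element-wise XNOR product is flipped independently with
# probability p_error, mimicking faulty in-memory XNOR operations.
import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    """Binarize to {-1, +1} with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients only where the input lies in [-1, 1]
        return grad_output * (x.abs() <= 1).float()


class NoisyXnorLinear(nn.Module):
    """Binary linear layer whose per-element products (the XNOR results)
    are sign-flipped with probability p_error during training."""

    def __init__(self, in_features, out_features, p_error=0.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.uniform_(self.weight, -1.0, 1.0)
        self.p_error = p_error

    def forward(self, x):
        a = BinarizeSTE.apply(x)            # binarized activations in {-1, +1}
        w = BinarizeSTE.apply(self.weight)  # binarized weights in {-1, +1}
        prod = a.unsqueeze(1) * w.unsqueeze(0)   # element-wise products == XNOR outputs
        if self.training and self.p_error > 0:
            flip = torch.bernoulli(torch.full_like(prod, self.p_error))
            prod = prod * (1.0 - 2.0 * flip)     # flip sign where an error occurs
        return prod.sum(dim=-1)                  # popcount-style accumulation


# Usage: train with errors enabled so the network becomes resilient to them.
layer = NoisyXnorLinear(in_features=256, out_features=128, p_error=0.05)
out = layer(torch.randn(32, 256))  # shape: (32, 128)
```

A run-time adaptation scheme in the spirit of the abstract could then assign a different `p_error` (i.e., XNOR operating speed) to each layer, depending on how much that layer's accuracy degrades under errors.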