Abstract

As more and more artificial intelligence capabilities are deployed on resource-constrained devices, designers are exploring several techniques to boost energy efficiency. Two such techniques are quantization and voltage scaling. Quantization reduces the memory footprint as well as the number of memory accesses, while scaling down the SRAM supply voltage reduces access energy at the cost of occasional bit errors. This article therefore explores the resilience of convolutional neural networks to SRAM-based errors and analyzes the relative energy impact of quantization and voltage scaling, when used separately and jointly.

-Theocharis Theocharides, University of Cyprus
-Muhammad Shafique, Technische Universität Wien
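
As a rough illustration of the kind of analysis the abstract describes (not the authors' actual methodology), the sketch below quantizes a weight tensor to 8-bit integers and injects random bit flips to emulate SRAM read errors under aggressive voltage scaling. The bit-error rate, layer shape, and quantization scheme are illustrative assumptions.

```python
import numpy as np

def quantize_uint8(weights):
    """Uniform affine quantization of float weights to 8-bit integers."""
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / 255.0
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize(q, scale, w_min):
    """Map 8-bit codes back to approximate float weights."""
    return q.astype(np.float32) * scale + w_min

def inject_bit_flips(q, bit_error_rate, rng):
    """Flip each stored bit independently with probability bit_error_rate,
    emulating SRAM failures when the supply voltage is lowered."""
    flips = rng.random((q.size, 8)) < bit_error_rate
    masks = (flips * (1 << np.arange(8))).sum(axis=1).astype(np.uint8)
    return (q.reshape(-1) ^ masks).reshape(q.shape)

# Example: measure how much a given bit-error rate perturbs the stored weights.
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 64)).astype(np.float32)   # stand-in for one CNN layer
q, scale, w_min = quantize_uint8(weights)
q_faulty = inject_bit_flips(q, bit_error_rate=1e-3, rng=rng)  # illustrative error rate
error = np.abs(dequantize(q_faulty, scale, w_min) - dequantize(q, scale, w_min))
print(f"mean weight perturbation: {error.mean():.4f}, max: {error.max():.4f}")
```

A resilience study in the spirit of the abstract would feed such perturbed weights back into the network and track accuracy across bit-error rates and bit widths, alongside the corresponding memory-energy savings.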
