Abstract

Two papers were selected from the digital-hardware sessions of ISSCC 2019. Both propose energy-efficient architectures and circuits for accelerating deep neural network inference and training. The first, by Kim et al., demonstrates a mobile processor that performs reinforcement learning at high energy efficiency (2.1 TFLOPS/W) via a novel experience compression scheme. The second, by Lee et al., proposes another energy-efficient learning processor; it flexibly supports precisions from 8-bit floating point (FP8) to 16-bit floating point (FP16) and achieves 25.3 TFLOPS/W.
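
The abstract does not detail Kim et al.'s experience compression scheme, but the general idea behind shrinking reinforcement-learning experience storage can be illustrated with a toy replay buffer that quantizes float32 observations to uint8. This is a hypothetical sketch: the class name, the value range, and the choice of 8-bit linear quantization are assumptions, not the paper's design.

    import numpy as np

    class CompressedReplayBuffer:
        """Toy replay buffer storing observations as quantized uint8
        rather than float32, cutting experience memory by 4x.
        Illustrative only; not the scheme proposed by Kim et al."""

        def __init__(self, capacity, obs_dim, lo=-1.0, hi=1.0):
            self.obs = np.zeros((capacity, obs_dim), dtype=np.uint8)
            self.lo, self.hi = lo, hi
            self.capacity, self.size, self.ptr = capacity, 0, 0

        def add(self, observation):
            # Quantize: map floats in [lo, hi] onto 256 uint8 levels.
            clipped = np.clip(observation, self.lo, self.hi)
            levels = np.round((clipped - self.lo) / (self.hi - self.lo) * 255)
            self.obs[self.ptr] = levels.astype(np.uint8)
            self.ptr = (self.ptr + 1) % self.capacity
            self.size = min(self.size + 1, self.capacity)

        def sample(self, batch_size, rng=np.random):
            # Dequantize back to float32 for the learner.
            idx = rng.randint(0, self.size, size=batch_size)
            return self.obs[idx] / 255.0 * (self.hi - self.lo) + self.lo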
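
Likewise, the abstract does not specify which FP8 encoding Lee et al. adopt; a common split is 1 sign, 4 exponent, and 3 mantissa bits, versus FP16's 1-5-10. The minimal decoder below (assumed format, not necessarily the paper's) shows how the same sign/exponent/mantissa arithmetic generalizes across both widths, which is what lets one datapath trade precision for energy per operation.

    def decode_fp(bits, exp_bits, man_bits):
        """Decode an integer bit pattern as a sign/exponent/mantissa float.
        Handles normal and subnormal values; inf/NaN omitted for brevity."""
        bias = (1 << (exp_bits - 1)) - 1
        sign = -1.0 if (bits >> (exp_bits + man_bits)) & 1 else 1.0
        exp = (bits >> man_bits) & ((1 << exp_bits) - 1)
        man = bits & ((1 << man_bits) - 1)
        if exp == 0:  # subnormal: no implicit leading 1
            return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
        return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

    # The same value, 1.5, in an assumed 1-4-3 FP8 format and in IEEE FP16:
    print(decode_fp(0b0_0111_100, exp_bits=4, man_bits=3))           # 1.5
    print(decode_fp(0b0_01111_1000000000, exp_bits=5, man_bits=10))  # 1.5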
