Abstract

High-performance systems that form the compute backbone of the IoT are increasingly challenged by the stringent compute requirements of emerging cloud-native applications. This is particularly evident for modern machine-learning and artificial intelligence applications. As an example, consider the neural network (NN) training problem. NNs for natural language processing applications already require training a trillion parameters (shown in Figure 1), thereby imposing a 100× growth in compute requirements, all within a year [1]!
