Abstract

In our previous work we have shown that resistive cross-point devices, so-called resistive processing unit (RPU) devices, can provide significant power and speed benefits when training deep fully connected networks as well as convolutional neural networks. In this work, we further extend the RPU concept to training recurrent neural networks (RNNs), namely long short-term memory (LSTM) networks. We show that the mapping of recurrent layers is very similar to the mapping of fully connected layers and therefore the RPU concept can potentially provide large acceleration factors for RNNs as well. In addition, we study the effect of various device imperfections and system parameters on training performance. Symmetry of updates becomes even more crucial for RNNs; even a few percent asymmetry results in an increase in the test error compared to the ideal case trained with floating point numbers. Furthermore, the input signal resolution to the device arrays needs to be at least 7 bits for successful training. However, we show that a stochastic rounding scheme can reduce the input signal resolution back to 5 bits. Further, we find that RPU device variations and hardware noise are enough to mitigate overfitting, so that there is less need for dropout. Here we attempt to study the validity of the RPU approach by simulating large-scale networks. For instance, the models studied here are roughly 1500 times larger than the more often studied multilayer perceptron models trained on the MNIST dataset in terms of the total number of multiplication and summation operations performed per epoch.
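As a rough illustration of the stochastic rounding idea mentioned above, the sketch below compares deterministic and stochastic rounding of input signals onto a uniform grid. The symmetric input range, the 2^n - 1 quantization levels, and the example input distribution are assumptions made for this sketch, not details taken from the paper; it only shows why stochastic rounding preserves information that nearest-level rounding at low resolution discards.

```python
import numpy as np

def quantize(x, n_bits, x_max=1.0, stochastic=False, rng=None):
    """Quantize values in [-x_max, x_max] onto a uniform grid with 2**n_bits levels.

    With stochastic=True, each value is rounded up or down with probability
    proportional to its distance to the neighboring grid points, so the
    quantizer is unbiased in expectation.
    """
    rng = rng or np.random.default_rng()
    step = 2.0 * x_max / (2 ** n_bits - 1)           # grid spacing
    scaled = np.clip(x, -x_max, x_max) / step        # position on the grid
    if stochastic:
        floor = np.floor(scaled)
        p_up = scaled - floor                        # fractional part
        rounded = floor + (rng.random(x.shape) < p_up)
    else:
        rounded = np.round(scaled)                   # nearest-level rounding
    return rounded * step

# Example (illustrative): small inputs that all fall below half a 5-bit step.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 0.03, size=100_000)
det = quantize(x, n_bits=5, stochastic=False)
sto = quantize(x, n_bits=5, stochastic=True, rng=rng)
print("true mean:               ", x.mean())
print("5-bit deterministic mean:", det.mean())       # everything rounds to zero
print("5-bit stochastic mean:   ", sto.mean())       # close to the true mean
```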

Highlights

  • We note that the total number of trainable parameters for the largest networks trained here is about 3.4M and the total number of multiplication and summation operations that need to be performed during a single training epoch is about 10^14.

  • We explored the applicability of the resistive processing unit (RPU) device concept to training long short-term memory (LSTM) networks.

  • We found that training LSTM blocks is very similar to training fully connected layers, because a single vector operation on the RPU array can be used to perform all of the linear transformations within an LSTM block; each RPU device performs one multiplication and one summation, which is where the factor of two in the operation count comes from (see the sketch after this list).
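As a minimal illustration of this mapping, the sketch below stacks the four LSTM gate weight matrices into one matrix so that all gate pre-activations come from a single vector-matrix product, the kind of operation a cross-point array performs in one parallel pass. The layer sizes, the [i; f; g; o] stacking order, and the operation-count printout are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step where all four gate pre-activations come from a
    single matrix-vector product, as they would on one RPU cross-point array.

    W has shape (4*hidden, input + hidden); rows are stacked as [i; f; g; o].
    """
    z = W @ np.concatenate([x, h_prev]) + b          # the single analog MAC pass
    i, f, g, o = np.split(z, 4)
    i, f, o = 1 / (1 + np.exp(-i)), 1 / (1 + np.exp(-f)), 1 / (1 + np.exp(-o))
    g = np.tanh(g)
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

# Illustrative sizes (not the paper's): 512 inputs, 512 hidden units.
n_in, n_hid = 512, 512
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.01
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)

# Each cross-point device contributes one multiplication and one summation
# per forward pass, hence the factor of two in operation counts.
print("devices on the array:", W.size)
print("ops per forward pass on this array:", 2 * W.size)
```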



Introduction

Deep neural networks (DNNs) (LeCun et al., 2015) have made tremendous improvements in the past few years, tackling challenging problems such as speech recognition (Hinton et al., 2012; Ravanelli et al., 2017), natural language processing (Collobert et al., 2012; Jozefowicz et al., 2016), image classification (Krizhevsky et al., 2012; Chen et al., 2017), and machine translation (Wu, 2016). These accomplishments became possible thanks to advances in computing resources, the availability of large amounts of data, and clever choices of neural network architectures. In addition to the digital approaches, resistive cross-point device arrays are promising candidates that perform the multiplication and summation operations in the analog domain, which can provide massive acceleration and power benefits (Gokmen and Vlasov, 2016).
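To make the analog multiply-and-accumulate idea concrete, the following toy model treats a cross-point array as a conductance matrix and computes column currents as voltage-weighted sums (Ohm's law per device, Kirchhoff's current law per column). The array size, conductance range, and Gaussian read-noise model are illustrative assumptions rather than the paper's hardware model.

```python
import numpy as np

def crossbar_vmm(voltages, conductances, read_noise=0.0, rng=None):
    """Idealized analog vector-matrix multiply on a resistive cross-point array.

    Each column current is the sum over rows of voltage * conductance, so the
    whole multiply-accumulate happens in one parallel read. Optional Gaussian
    read noise crudely models analog non-idealities.
    """
    rng = rng or np.random.default_rng()
    currents = voltages @ conductances               # I_j = sum_i V_i * G_ij
    if read_noise > 0.0:
        currents = currents + rng.normal(0.0, read_noise, currents.shape)
    return currents

# Toy example: a 256x128 array, i.e. 128 weighted sums computed in parallel.
rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(256, 128))           # device conductances
V = rng.uniform(-1.0, 1.0, size=256)                 # input voltages
exact = V @ G
analog = crossbar_vmm(V, G, read_noise=0.01, rng=rng)
print("max deviation from exact result:", np.abs(analog - exact).max())
```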

