Abstract

The development of the mobile Internet and the success of deep learning in many applications have driven the need to deploy deep learning models on resource-constrained mobile devices. Long Short-Term Memory (LSTM), a special architecture in deep learning, can learn long-range dependencies hidden in time series. However, the high computational complexity of LSTM-related structures and the large amount of resources required for training have become obstacles to their deployment on mobile devices. To reduce the resource requirements and computational cost of LSTMs, we use a pruning strategy to preserve important connections during the training phase. After training, we further reduce the complexity of the LSTM network through a weight-sharing strategy. Based on these strategies, we propose a sparsely connected LSTM with shared weights (SCLSTM). Experimental results on real data sets show that SCLSTM, with only 0.88% of the neural connections, achieves predictive performance comparable to that of a densely connected LSTM. Moreover, SCLSTM alleviates overfitting to some extent. The experiments further demonstrate that SCLSTM outperforms state-of-the-art algorithms on mobile devices with limited resources.
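To illustrate the two compression steps the abstract describes (magnitude-based pruning of connections, followed by weight sharing across the surviving connections), the sketch below applies them to a single LSTM gate weight matrix. It is a minimal, framework-free illustration, not the paper's actual implementation: the function names, the 1-D k-means codebook used for sharing, and the matrix dimensions are illustrative assumptions; only the 0.88% connection ratio comes from the abstract.

```python
import numpy as np

def prune_by_magnitude(W, keep_ratio=0.0088):
    """Keep only the largest-magnitude weights; zero the rest (sparse connections)."""
    k = max(1, int(round(keep_ratio * W.size)))
    threshold = np.sort(np.abs(W).ravel())[-k]
    mask = np.abs(W) >= threshold
    return W * mask, mask

def share_weights(W, mask, n_clusters=16, n_iter=20):
    """Replace surviving weights with shared values from a small codebook (1-D k-means)."""
    vals = W[mask]
    centroids = np.linspace(vals.min(), vals.max(), n_clusters)  # initial codebook
    for _ in range(n_iter):
        assign = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            members = vals[assign == c]
            if members.size:
                centroids[c] = members.mean()
    W_shared = np.zeros_like(W)
    W_shared[mask] = centroids[assign]  # each surviving connection now shares a codebook value
    return W_shared, centroids

# Hypothetical example: one LSTM gate matrix (hidden size 128, input size 64)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(128, 64 + 128))
W_sparse, mask = prune_by_magnitude(W)
W_final, codebook = share_weights(W_sparse, mask)
print(f"kept {mask.mean():.2%} of connections, {codebook.size} shared weight values")
```

In practice, the pruning mask would be applied during training so that important connections are preserved, and the codebook indices (rather than full-precision weights) would be stored for deployment, which is what reduces memory on a mobile device.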
