Abstract

A deep neural network parametrizes a multilayer mapping of signals in terms of many alternately arranged linear and nonlinear transformations. The linear transformations, which are generally used in the fully connected as well as convolutional layers, contain most of the variational parameters that are trained and stored. Compressing a deep neural network to reduce its number of variational parameters without weakening its prediction power is an important but challenging problem, both for training these parameters efficiently and for lowering the risk of overfitting. Here we show that this problem can be effectively solved by representing linear transformations with matrix product operators (MPOs), a tensor network originally proposed in physics to characterize the short-range entanglement in one-dimensional quantum states. We have tested this approach on five typical neural networks, namely FC2, LeNet-5, VGG, ResNet, and DenseNet, using two widely used data sets, MNIST and CIFAR-10, and found that the MPO representation indeed sets up a faithful and efficient mapping between input and output signals, which can keep or even improve the prediction accuracy with a dramatically reduced number of parameters. Our method greatly simplifies the representations used in deep learning and opens a possible route toward a framework for modern neural networks that is simpler and cheaper, yet more efficient.
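As a rough, self-contained illustration of what an MPO (also known as a tensor-train) representation of a weight matrix looks like, the following sketch builds a toy example in NumPy. The layer size 256 x 256, its factorization into four indices of size 4, and the bond dimension D are assumptions made only for this sketch, not configurations taken from the paper.

    import numpy as np

    # Toy sizes: a 256 x 256 fully connected weight matrix, with 256 = 4*4*4*4,
    # represented by a chain of 4 MPO cores (all values chosen for illustration).
    in_dims  = [4, 4, 4, 4]          # input indices i_1 ... i_4, product = 256
    out_dims = [4, 4, 4, 4]          # output indices j_1 ... j_4, product = 256
    D = 4                            # bond dimension linking neighboring cores
    bonds = [1, D, D, D, 1]          # D_0 ... D_4, trivial bonds at the boundaries

    rng = np.random.default_rng(0)
    # Core k has shape (D_{k-1}, i_k, j_k, D_k).
    cores = [rng.standard_normal((bonds[k], in_dims[k], out_dims[k], bonds[k + 1]))
             for k in range(4)]

    # Contract the cores back into a dense matrix to check that the MPO really
    # parametrizes an ordinary linear map of shape (256, 256).
    W = cores[0]
    for core in cores[1:]:
        W = np.tensordot(W, core, axes=([-1], [0]))   # merge the shared bond leg
    W = W.squeeze()                                    # drop the trivial boundary bonds
    W = W.transpose(0, 2, 4, 6, 1, 3, 5, 7)            # reorder to (i_1..i_4, j_1..j_4)
    W = W.reshape(np.prod(in_dims), np.prod(out_dims))

    dense_params = W.size                    # 65536 entries in the dense matrix
    mpo_params = sum(c.size for c in cores)  # 640 entries in the MPO cores
    print(dense_params, mpo_params)

Once the cores are contracted, applying the layer to an input vector x is simply x @ W; the appeal of the representation is that the cores can also be contracted with the input directly, core by core, so the full dense matrix never needs to be formed or stored.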

Highlights

  • Deep neural networks [1,2,3,4,5,6,7,8,9,10,11] are important tools of artificial intelligence

  • We show the results obtained with the matrix product operator (MPO) representation in five typical neural networks on two data sets: FC2 [56] and LeNet-5 [2] on the MNIST data set [57]; VGG [9], ResNet [10], and DenseNet [11] on the CIFAR-10 data set [58]

  • Motivated by the success of MPOs in the study of quantum many-body systems with short-range interactions, we propose to use MPOs to represent linear transformation matrices in deep neural networks
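
Since most of the networks listed above (LeNet-5, VGG, ResNet, and DenseNet) are convolutional, it may help to see how a convolutional layer exposes a large linear transformation in the first place. The sketch below only unfolds a kernel into a matrix; the kernel and channel sizes are invented for the example, and this is not necessarily how the paper itself treats convolutional layers.

    import numpy as np

    # Hypothetical convolution kernel: 3x3 window, 256 input channels,
    # 512 output channels (sizes invented for this example).
    kernel = np.random.randn(3, 3, 256, 512)

    # Unfold it into an ordinary linear transformation acting on image patches:
    # each flattened 3*3*256 patch is mapped to a 512-dimensional output vector.
    W = kernel.reshape(3 * 3 * 256, 512)
    print(W.shape, W.size)   # (2304, 512), about 1.2 million parameters

    # In principle, this matrix can then be factorized into a chain of MPO cores
    # in the same way as a fully connected weight matrix, by splitting 2304 and
    # 512 into the same number of small integer factors.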


Summary

Introduction

Deep neural networks [1,2,3,4,5,6,7,8,9,10,11] are important tools of artificial intelligence. Their applications in many computing tasks, for example in the famous ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [12], large-vocabulary continuous speech recognition [13], and natural language processing [14], have achieved great success. Such a network maps input to output signals through alternately arranged linear and nonlinear transformations. The nonlinear mappings, which contain almost no free parameters, are realized by operations known as activation functions, such as the rectified linear unit (ReLU) and softmax.
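To make the last point concrete, the activation functions named here are fixed operations with no trainable weights; a minimal NumPy sketch of the two mentioned above:

    import numpy as np

    def relu(x):
        # Rectified linear unit: element-wise max(x, 0); no trainable parameters.
        return np.maximum(x, 0.0)

    def softmax(x):
        # Softmax over the last axis (shifted for numerical stability); also parameter-free.
        z = np.exp(x - x.max(axis=-1, keepdims=True))
        return z / z.sum(axis=-1, keepdims=True)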

