Deep neural networks (NNs) encounter scalability limitations when the number of neurons grows very large, which constrains the network depth they can reach in practice. To address this challenge, we propose integrating tensor networks (TN) into NN frameworks, combined with a variational, DMRG-inspired training technique. This, in turn, yields a scalable tensor neural network (TNN) architecture that can be trained efficiently over a large parameter space. Our variational algorithm relies on a local gradient-descent technique that allows tensor gradients to be computed either manually or automatically, facilitating the design of hybrid TNN models that combine dense and tensor layers. Our training algorithm further provides insight into the entanglement structure of the tensorized trainable weights and the correlations among the model parameters. We validate the accuracy and efficiency of our method by designing TNN models and providing benchmark results for linear and non-linear regression, data classification, and image recognition on MNIST handwritten digits.
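To make the idea of a tensorized trainable layer concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a dense layer whose weight matrix is stored as a tensor train (MPS) of local cores, written in PyTorch. The class name `TTLinear`, the mode factorization, and the bond ranks are assumptions chosen for illustration; the paper's DMRG-inspired sweep over individual cores is not reproduced here.

```python
import torch
import torch.nn as nn


class TTLinear(nn.Module):
    """Illustrative dense layer whose weight matrix is a tensor train (MPS).

    The input dimension is factored into `in_modes`, the output dimension into
    `out_modes`, and the weight is stored as one small core per mode instead of
    a single large matrix. (Sketch only; names and ranks are assumptions.)
    """

    def __init__(self, in_modes, out_modes, ranks):
        super().__init__()
        assert len(in_modes) == len(out_modes) == len(ranks) - 1
        assert ranks[0] == ranks[-1] == 1  # open boundary bonds of the MPS
        self.in_modes, self.out_modes = in_modes, out_modes
        # One core per mode: (left bond, input mode, output mode, right bond)
        self.cores = nn.ParameterList([
            nn.Parameter(0.1 * torch.randn(ranks[k], in_modes[k],
                                           out_modes[k], ranks[k + 1]))
            for k in range(len(in_modes))
        ])

    def forward(self, x):
        batch = x.shape[0]
        # t keeps shape (batch, processed output modes, bond, remaining input modes)
        t = x.reshape(batch, 1, 1, -1)
        for k, core in enumerate(self.cores):
            n_k = self.in_modes[k]
            # split off the current input mode from the remaining ones
            t = t.reshape(batch, t.shape[1], t.shape[2], n_k, -1)
            # contract the bond index and the current input mode with the core
            t = torch.einsum("bprnq,rnms->bpmsq", t, core)
            # fold the new output mode into the processed block
            t = t.reshape(batch, t.shape[1] * t.shape[2], t.shape[3], -1)
        return t.reshape(batch, -1)


# Example usage with an MNIST-sized input (784 = 4*7*7*4 -> 256 = 4^4):
layer = TTLinear(in_modes=(4, 7, 7, 4), out_modes=(4, 4, 4, 4),
                 ranks=(1, 8, 8, 8, 1))
y = layer(torch.randn(32, 784))  # y.shape == (32, 256)
```

Because the weight is never reconstructed as a full matrix, the parameter count scales with the mode sizes and bond ranks rather than with the product of the input and output dimensions, which is what makes such tensorized layers attractive for very wide networks.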