In recent years, artificial neural networks (ANNs), especially deep neural networks (DNNs), have become a promising new approach in the field of numerical computation due to their high computational efficiency on heterogeneous platforms and their ability to fit high-dimensional complex systems. When solving partial differential equations numerically, the solution of large-scale systems of linear equations is usually the most time-consuming step; utilizing neural network methods to solve linear equations is therefore a promising idea. However, direct prediction by deep neural networks still has obvious shortcomings in numerical accuracy, which has become one of the bottlenecks for its application in the field of numerical computation. To break this limitation, this paper proposes a deep neural network-based solver for linear equations that combines a Residual network (ResNet) architecture with a correction iteration method, with the aim of accelerating the solution of partial differential equations on heterogeneous platforms. Specifically, the Residual network resolves the network degradation and gradient vanishing problems of deep network models, reducing the network loss to 1/5000 of that of a classical network model; the correction iteration method iteratively reduces the error of the predicted solution using the same network model, decreasing the residual of the predicted solution to 10⁻⁵ times its pre-iteration value. To verify the effectiveness and generality of the proposed method, we combined it with the finite difference method to solve the heat conduction equation and Burgers' equation. Numerical results demonstrate that the algorithm achieves a speedup of more than 10× for systems of dimension larger than 1000, with a numerical error lower than the discretization error of the second-order difference scheme.
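The correction iteration described above is structurally a classical iterative refinement loop: the network predicts an approximate solution, and the residual of that prediction is fed back through the same network to produce a correction. The following is a minimal sketch of that scheme, with an inexact linear map standing in for the trained network (the names `predict` and `correction_iteration` are illustrative, not from the paper):

```python
import numpy as np

def correction_iteration(A, b, predict, n_iter=5):
    """Refine a predicted solution of A x = b by repeatedly solving for
    the residual with the same approximate solver `predict` (a stand-in
    here for the trained network)."""
    x = predict(b)                 # initial direct prediction
    for _ in range(n_iter):
        r = b - A @ x              # residual of the current estimate
        x = x + predict(r)         # correct with the approximate solve of A e = r
    return x

# Stand-in "network": a slightly perturbed inverse of A, mimicking an
# approximate learned solver (assumption for illustration only).
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A_inv_approx = np.linalg.inv(A) + 1e-3 * rng.standard_normal((n, n))
predict = lambda v: A_inv_approx @ v

b = rng.standard_normal(n)
x_direct = predict(b)
x_refined = correction_iteration(A, b, predict)
print(np.linalg.norm(b - A @ x_direct), np.linalg.norm(b - A @ x_refined))
```

As long as the approximate solver is close enough to A⁻¹ that the iteration contracts, each pass multiplies the residual by a factor well below one, which is why the refined residual can fall several orders of magnitude below that of the direct prediction.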