Abstract

In this paper, we continue to study our neural network (NN) approach to constructing approximate solutions of boundary value problems for partial differential equations. We test the NN learning algorithms on the Dirichlet problem for the Laplace equation in the unit square. We do not exploit the linearity or other special features of the Laplace equation; therefore, we expect similar behavior of the algorithms under study for other problems. In the first algorithm, we use a constant number of neurons and train the entire network at once. The second and third algorithms are evolutionary, i.e., the structure of the NN (the number of neurons) changes during the learning process. In the second algorithm, we add neurons one by one after a certain number of training steps and then retrain the entire resulting NN. In the third algorithm, we add neurons in the same way, but only the last added neuron is trained. In all series of experiments, the maximum number of neurons is 10, which was sufficient to achieve acceptable training quality. We estimate the quality of NN training by an error functional consisting of two terms responsible for satisfying the equation and the boundary conditions, respectively. These summands are calculated on a set of trial points (TPs). We compare the training quality obtained with a fixed set of TPs against that obtained with the proposed procedure of re-generating the TPs. Computational experiments have shown that the TP re-generation procedure, even with a small number of trial points, reduces the error by an order of magnitude.
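The two-term error functional described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `error_functional`, the penalty weight, and the numbers of trial points are assumptions; the candidate solution is passed in together with its (analytically known) Laplacian, whereas in the paper the candidate is an NN. Trial points are drawn afresh on every call, mimicking the re-generation procedure.

```python
import numpy as np

def error_functional(u, lap_u, g, n_interior=50, n_boundary=50,
                     penalty=1.0, rng=None):
    """Two-term error functional for the Dirichlet problem for the
    Laplace equation in the unit square:
      - equation term: mean squared Laplacian residual at interior TPs,
      - boundary term: mean squared mismatch with the boundary data g.
    Trial points are re-generated (sampled anew) on every call."""
    rng = np.random.default_rng() if rng is None else rng

    # Interior trial points, uniform in the open unit square.
    xi = rng.uniform(0.0, 1.0, n_interior)
    yi = rng.uniform(0.0, 1.0, n_interior)
    eq_term = np.mean(lap_u(xi, yi) ** 2)

    # Boundary trial points: pick one of the four sides per point.
    t = rng.uniform(0.0, 1.0, n_boundary)
    side = rng.integers(0, 4, n_boundary)  # 0: y=0, 1: y=1, 2: x=0, 3: x=1
    xb = np.where(side <= 1, t, np.where(side == 2, 0.0, 1.0))
    yb = np.where(side == 0, 0.0, np.where(side == 1, 1.0, t))
    bc_term = np.mean((u(xb, yb) - g(xb, yb)) ** 2)

    return eq_term + penalty * bc_term
```

As a sanity check, the exact harmonic solution u(x, y) = sin(πx) sinh(πy)/sinh(π) of the Dirichlet problem with g = u on the boundary yields a functional value of (numerically) zero, while any candidate violating the boundary data yields a strictly larger value.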
