Abstract

There is a growing wave of interest in using physics-informed neural networks to solve differential equations. Most existing methods are based on feed-forward networks, while recurrent neural network solvers remain largely unexplored. We introduce a reservoir computing (RC) architecture, an echo-state recurrent neural network capable of discovering approximate solutions that satisfy ordinary differential equations (ODEs). We propose an approach for computing the time derivatives of recurrent neural network outputs without back-propagation. The internal weights of an RC are fixed, while only a linear output layer is trained, yielding efficient training. However, RC performance strongly depends on finding optimal hyper-parameters, which is a computationally expensive process. We use Bayesian optimization to efficiently discover optimal sets in a high-dimensional hyper-parameter space and show numerically that one such set is robust and can be transferred to solve an ODE under different initial conditions and over different time ranges. A closed-form formula for the optimal output weights is derived for solving first-order linear equations in a one-shot, backpropagation-free learning process. We extend the RC approach to nonlinear systems of ODEs using a hybrid optimization method consisting of gradient descent and Bayesian optimization. Evaluation on linear and nonlinear systems of equations demonstrates the efficiency of the RC ODE solver.
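To make the one-shot scheme concrete, the sketch below illustrates how a closed-form readout can arise for a first-order linear ODE. It assumes a leaky-tanh echo-state reservoir driven by the time variable and a trial solution that satisfies the initial condition by construction; because the reservoir obeys a known differential equation, its time derivative is available analytically without back-propagation, and the ODE residual is then linear in the output weights, so the readout follows from ordinary least squares. The reservoir form, the forward-Euler integration, and all names and hyper-parameter values are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's exact construction) of a
# closed-form RC solver for a first-order linear ODE  dy/dt + a*y = f(t),
# y(0) = y0.  Hyper-parameter values below are illustrative.
rng = np.random.default_rng(0)
N, rho, tau = 200, 0.9, 1.0                 # reservoir size, spectral radius, time scale
W = rng.normal(0.0, 1.0, (N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to spectral radius rho
W_in = rng.uniform(-1.0, 1.0, N)            # input weights for the time signal
b = rng.uniform(-0.5, 0.5, N)               # bias

# Reservoir ODE: tau * dh/dt = -h + tanh(W h + W_in * t + b).  Because this
# equation is known, dh/dt (and hence the derivative of the network output)
# is available analytically; no back-propagation through time is required.
T, dt = 5.0, 1e-3
ts = np.arange(0.0, T, dt)
H, dH = np.zeros((len(ts), N)), np.zeros((len(ts), N))
h = np.zeros(N)
for i, t in enumerate(ts):
    dh = (-h + np.tanh(W @ h + W_in * t + b)) / tau
    H[i], dH[i] = h, dh
    h = h + dt * dh                          # forward-Euler reservoir update

# Trial solution u(t) = y0 + W_out . (h(t) - h(0)) satisfies u(0) = y0 exactly.
# The ODE residual  dH W_out + a*(y0 + (H - H[0]) W_out) - f  is linear in
# W_out, so the optimal readout is the least-squares solution below: the
# "one-shot", backpropagation-free training step.
a, y0 = 1.0, 1.0
f = np.zeros(len(ts))                        # example: dy/dt = -y, y(0) = 1
A = dH + a * (H - H[0])
r = f - a * y0
W_out, *_ = np.linalg.lstsq(A, r, rcond=None)

u = y0 + (H - H[0]) @ W_out                  # RC approximation of y(t)
print(np.max(np.abs(u - y0 * np.exp(-a * ts))))  # error vs. exact solution e^{-t}
```

Enforcing the initial condition through the trial solution rather than a penalty term is what keeps the residual linear in W_out here; once nonlinear terms in the ODE break that linearity, a closed form is no longer available, which is where the paper's hybrid gradient-descent and Bayesian-optimization scheme comes in.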
