Abstract

In many scientific and engineering problems, noise and nonlinearity are unavoidable and can induce interesting mathematical problems such as transition phenomena. This paper focuses on efficiently discovering the most probable transition pathway of a stochastic dynamical system by means of reinforcement learning. Using the Onsager–Machlup action functional to quantify rare events in stochastic dynamical systems, finding the most probable pathway is equivalent to solving a variational problem for the action functional. When the action functional cannot be expressed explicitly in terms of paths near the reference orbit, the variational problem must be converted into an optimal control problem. First, by integrating terminal prediction into the reinforcement learning framework, we develop a Terminal Prediction Deep Deterministic Policy Gradient (TP-DDPG) algorithm that solves the resulting finite-horizon optimal control problem in a forward manner. Next, we present a convergence analysis of our algorithm for the value function in terms of the neural network's approximation error and estimation error. Finally, we conduct experiments on transition problems of various dimensions arising in applications to illustrate the effectiveness of our algorithm.
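
For orientation, the following is a minimal sketch of the standard Onsager–Machlup action functional, assuming a stochastic differential equation with additive unit-intensity noise, dX_t = b(X_t)\,dt + dW_t; the paper's precise setting (noise intensity, state dimension) may differ:

\[
  S^{\mathrm{OM}}_T(\varphi) \;=\; \frac{1}{2}\int_0^T \Bigl[\, \bigl|\dot{\varphi}(t) - b(\varphi(t))\bigr|^2 \;+\; \nabla\cdot b(\varphi(t)) \,\Bigr]\, dt .
\]

The most probable transition pathway connecting two metastable states x_0 and x_T is the minimizer of S^{\mathrm{OM}}_T over paths with \varphi(0) = x_0 and \varphi(T) = x_T. Viewing u(t) = \dot{\varphi}(t) as a control and the integrand as a running cost turns this constrained minimization into the kind of finite-horizon optimal control problem that the TP-DDPG algorithm above is designed to solve.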
