Abstract

This paper proposes a recurrent learning algorithm for designing the controllers of continuous dynamical systems in optimal control problems. The controllers take the form of unfolded recurrent neural nets embedded with physical laws from classical control techniques. The learning algorithm is characterized by a double forward-recurrent-loop structure for solving both the temporal-recurrence and the structural-recurrence problems. The first problem arises from the nature of general optimal control problems, whose objective functions are often evaluated only at specific time steps or system states, so learning signals are missing at the remaining steps or states. The second problem stems from the high-order Runge-Kutta discretization of the continuous systems, which we perform to increase accuracy. This discretization transforms the system into several identical interconnected subnetworks, much like a recurrent neural net unfolded along the time axis. Two recurrent learning algorithms with different convergence properties are derived: a first-order and a second-order algorithm. Their computations are local and can be carried out efficiently as network signal propagation. We also propose two new nonlinear control structures for the 2D guidance problem and the optimal PI control problem. Trained with the recurrent learning algorithms, these controllers can be readily tuned to be suboptimal with respect to given objective functions. Extensive computer simulations demonstrate the controllers' optimization and generalization abilities.
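To make the unfolding idea concrete, the following is a minimal sketch (not the paper's actual controller or plant) of how a continuous system dx/dt = f(x, u) with a parametric controller u = pi(x, theta) can be discretized by a classical fourth-order Runge-Kutta step and unrolled in time, so that each step acts as one identical interconnected subnetwork, and the cost is evaluated only at the final state. The plant, controller form, and cost below are hypothetical stand-ins chosen for illustration.

```python
import numpy as np

def f(x, u):
    # Hypothetical plant: a damped point mass driven by the control input u.
    pos, vel = x
    return np.array([vel, -0.1 * vel + u])

def pi(x, theta):
    # Hypothetical linear state-feedback controller, standing in for the
    # paper's physics-embedded recurrent controller.
    return float(theta @ x)

def rk4_step(x, theta, h):
    # One 4th-order Runge-Kutta step; each such step is one identical
    # "subnetwork" of the unfolded recurrent structure.
    k1 = f(x, pi(x, theta))
    k2 = f(x + 0.5 * h * k1, pi(x + 0.5 * h * k1, theta))
    k3 = f(x + 0.5 * h * k2, pi(x + 0.5 * h * k2, theta))
    k4 = f(x + h * k3, pi(x + h * k3, theta))
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout_cost(theta, x0, h=0.05, steps=100):
    # Objective evaluated only at the final state, so intermediate steps
    # carry no direct learning signal (the temporal-recurrence problem).
    x = x0
    for _ in range(steps):
        x = rk4_step(x, theta, h)
    return float(x @ x)

theta = np.array([-1.0, -0.5])
print(rollout_cost(theta, np.array([1.0, 0.0])))
```

Gradients of such a rollout cost with respect to theta would then be propagated backward (or forward, as in the paper's double forward-recurrent-loop scheme) through the identical subnetworks.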
