Abstract

In the past decade, there has been considerable research interest in learning control methods based on reinforcement learning (RL) and approximate dynamic programming (ADP). As an important class of function approximation techniques, kernel methods have recently been applied to improve the generalization ability of RL and ADP methods, but most previous works were based only on simulation. This paper focuses on experimental studies of real-time online learning control for nonlinear systems using kernel-based ADP methods. Specifically, the kernel-based dual heuristic programming (KDHP) method is applied and tested on real-time control systems. Two kernel-based online learning control schemes are presented for uncertain nonlinear systems, using simulation data and online sampling data, respectively. Learning control experiments were performed on a single-link inverted pendulum system as well as a double-link inverted pendulum system. The experimental results show that both online learning control schemes, whether using simulation data or real sampling data, are effective for approximating near-optimal control policies of nonlinear dynamical systems with model uncertainties. In addition, it is demonstrated that KDHP can achieve better performance than conventional DHP, which uses multilayer perceptron neural networks.
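
To make the idea of a kernel-based DHP critic concrete, the following is a minimal sketch, not the paper's implementation: it assumes an RBF kernel over a fixed dictionary of sample states, a known (or identified) state-transition Jacobian, and omits the actor-path terms of the full DHP target for brevity. All names and hyper-parameters are illustrative assumptions.

```python
# Minimal sketch of a kernel-based DHP critic update (illustrative, not the
# authors' code). The critic approximates the costate lambda(x) = dV/dx as a
# kernel expansion over a fixed dictionary of sample states.
import numpy as np

def rbf_kernel(x, centers, width=1.0):
    """RBF features k(x, c_i) for every dictionary center c_i."""
    d = centers - x                                      # (m, n_x)
    return np.exp(-np.sum(d * d, axis=1) / (2.0 * width ** 2))  # (m,)

class KernelDHPCritic:
    """Kernel critic: lambda(x) = sum_i alpha_i * k(x, c_i)."""
    def __init__(self, centers, n_x, width=1.0, lr=0.1, gamma=0.95):
        self.centers = centers                           # (m, n_x) dictionary
        self.alpha = np.zeros((centers.shape[0], n_x))   # kernel weights
        self.width, self.lr, self.gamma = width, lr, gamma

    def costate(self, x):
        return rbf_kernel(x, self.centers, self.width) @ self.alpha  # (n_x,)

    def update(self, x, x_next, dr_dx, dfx):
        """One DHP critic step toward the Bellman-derived costate target.

        dr_dx : gradient of the stage cost w.r.t. the state at x
        dfx   : Jacobian dx_next/dx of the (possibly identified) model
        (Actor-path terms of the full DHP target are omitted for brevity.)
        """
        target = dr_dx + self.gamma * dfx.T @ self.costate(x_next)
        phi = rbf_kernel(x, self.centers, self.width)
        error = self.costate(x) - target                 # (n_x,)
        self.alpha -= self.lr * np.outer(phi, error)     # gradient step
        return float(np.linalg.norm(error))

# Illustrative usage on a toy 2-state linear system (not the pendulum rigs).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = rng.uniform(-1, 1, size=(50, 2))
    critic = KernelDHPCritic(centers, n_x=2)
    A = np.array([[1.0, 0.1], [0.0, 1.0]])               # assumed model Jacobian
    x = np.array([0.5, -0.2])
    for _ in range(200):
        x_next = A @ x
        critic.update(x, x_next, dr_dx=2.0 * x, dfx=A)
        x = x_next if np.linalg.norm(x_next) < 1.0 else rng.uniform(-1, 1, 2)
```

In this sketch, the dictionary of kernel centers plays the role of the sampled data (simulated or real) mentioned above; a conventional DHP critic would replace the kernel expansion with a multilayer perceptron trained on the same costate targets.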
