Abstract
This paper considers the model-free optimal control problem for discrete-time systems using a deep deterministic policy gradient adaptive dynamic programming (DDPGADP) algorithm. System data are collected through off-policy learning, and the control law is updated by policy gradient. The convergence of the DDPGADP algorithm is established by showing that the Q-function sequence is monotonically non-increasing and converges to the optimum. To implement the method, an actor-critic neural network structure is constructed, adopting the target network technique from deep Q-learning during training. Finally, simulation examples are presented to verify the effectiveness of the proposed method.
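The abstract does not give the paper's exact implementation, but the ingredients it names (an actor-critic structure, off-policy data, policy-gradient updates, and target networks borrowed from deep Q-learning) correspond to a standard DDPG-style update. The following is a minimal sketch under those assumptions, written for cost minimization as is usual in optimal control; all dimensions, hyperparameters, and names (STATE_DIM, GAMMA, TAU, etc.) are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative problem sizes and hyperparameters (assumed, not from the paper).
STATE_DIM, ACTION_DIM = 4, 2
GAMMA, TAU = 0.99, 0.005  # discount factor and soft target-update rate

class Actor(nn.Module):
    """Deterministic control law u = mu(x)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM))
    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):
    """Q-function approximator Q(x, u)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, x, u):
        return self.net(torch.cat([x, u], dim=-1))

actor, critic = Actor(), Critic()
# Target networks start as copies, as in deep Q-learning.
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(x, u, cost, x_next):
    """One update from an off-policy batch of transitions (x, u, cost, x_next)."""
    # Critic step: regress Q(x, u) onto a Bellman target built from the
    # *target* networks, which keeps the regression target slowly varying.
    with torch.no_grad():
        target = cost + GAMMA * critic_tgt(x_next, actor_tgt(x_next))
    critic_loss = nn.functional.mse_loss(critic(x, u), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor step: deterministic policy gradient. Since the objective is a
    # cost, the actor *descends* Q(x, mu(x)) (no sign flip as with rewards).
    actor_loss = critic(x, actor(x)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft target updates: theta_tgt <- (1 - TAU) * theta_tgt + TAU * theta.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)
```

Because the critic target is computed from transition data alone, the update can be driven by trajectories generated under any behavior policy, which is the off-policy property the abstract refers to.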