Abstract

In this paper, the Bellman equation is used to solve the stochastic optimal control problem for an unknown linear discrete-time system subject to communication imperfections, including random delays, packet losses, and quantization. A dynamic quantizer is proposed for the sensor measurements, which essentially provide the system states to the controller. To eliminate the effect of the quantization error, the dynamics of the quantization error bound and an update law for tuning its range are derived. Subsequently, using an adaptive dynamic programming technique based on a Q-function and reinforcement learning, the infinite-horizon optimal regulation of the uncertain networked control system (NCS) is solved in a forward-in-time manner without value and/or policy iterations. The asymptotic stability of the closed-loop system is verified by standard Lyapunov stability theory. Finally, the effectiveness of the proposed method is demonstrated by simulation results.
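To make the Q-function idea concrete, the sketch below shows a standard model-free Q-learning scheme for discrete-time LQR: the quadratic Q-function matrix is estimated by least squares from input-state data, and the feedback gain is extracted from it without using the plant matrices. This is not the paper's algorithm (which runs forward in time without policy iteration and additionally handles delays, packet losses, and quantization); it is a minimal related example, and the plant matrices, cost weights, exploration noise, and batch sizes below are illustrative assumptions used only to generate data.

```python
# Minimal sketch (assumed, not the paper's exact method): Q-function based
# adaptive dynamic programming for discrete-time LQR, implemented as batch
# policy iteration with least-squares Q-function estimation.
import numpy as np

rng = np.random.default_rng(0)

# "Unknown" plant: used only to simulate data, never given to the learner.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)          # state cost weight (assumed)
Rc = np.eye(1)          # control cost weight (assumed)
n, m = 2, 1

def phi(x, u):
    """Quadratic basis: upper-triangular terms of z z^T with z = [x; u]."""
    z = np.concatenate([x, u])
    outer = np.outer(z, z)
    idx = np.triu_indices(n + m)
    # off-diagonal terms appear twice in z^T H z, so scale them by 2
    scale = np.where(idx[0] == idx[1], 1.0, 2.0)
    return outer[idx] * scale

K = np.zeros((m, n))    # initial stabilizing policy gain (open loop is stable)
x = np.array([1.0, -1.0])

for it in range(20):                      # evaluation / improvement passes
    Phi, y = [], []
    for _ in range(60):                   # collect one batch of transitions
        u = -K @ x + 0.1 * rng.standard_normal(m)    # exploration noise
        cost = x @ Qc @ x + u @ Rc @ u
        x_next = A @ x + B @ u
        u_next = -K @ x_next
        # Bellman equation Q(x,u) = cost + Q(x',u') written as a regression row
        Phi.append(phi(x, u) - phi(x_next, u_next))
        y.append(cost)
        x = x_next
        if np.linalg.norm(x) < 1e-3:      # re-excite when the state decays
            x = rng.standard_normal(n)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)

    # Rebuild the symmetric Q-function matrix H from the parameter vector
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = theta
    H = H + np.triu(H, 1).T

    Hux, Huu = H[n:, :n], H[n:, n:]
    K = np.linalg.solve(Huu, Hux)         # greedy policy: u = -Huu^{-1} Hux x

print("learned gain K =", K)
```

For a deterministic linear plant with quadratic cost, the regression above recovers the exact Q-function of the current policy given sufficiently exciting data, and the extracted gain converges to the LQR-optimal gain over the iterations; the paper's contribution lies in achieving this kind of optimal regulation online, in the presence of quantization, delays, and dropouts, without such iterative sweeps.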
