Abstract

Multi-dimensional uncertainties often modulate modern system dynamics in complicated ways, posing challenges for real-time control because of the significant computational load required to evaluate them within real-time decision processes. This chapter describes computationally efficient uncertainty evaluation methods for adaptive optimal control, including learning control and differential games. Two uncertainty evaluation methods are described: the multivariate probabilistic collocation method (MPCM) and its extension, the MPCM-OFFD, which integrates the MPCM with the orthogonal fractional factorial design (OFFD) to break the curse of dimensionality. These scalable uncertainty evaluation methods are then developed for reinforcement learning (RL)-based adaptive optimal control. Stochastic differential games, including two-player zero-sum and multi-player nonzero-sum games, are formulated and investigated, and their Nash equilibrium solutions are found in real time using MPCM-based on-policy and off-policy RL methods. Real-world applications to broadband long-distance aerial networking and strategic air traffic management demonstrate the practical use of MPCM- and MPCM-OFFD-based learning control for uncertain systems.
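
To make the collocation idea concrete, the sketch below illustrates how a probabilistic collocation scheme can estimate the expected output of an uncertain system from a small number of carefully chosen evaluation points. This is a minimal illustration under assumed independent Gaussian uncertainties, not the chapter's exact MPCM construction; the function `system_output` and all parameter values are hypothetical placeholders.

```python
# Minimal sketch of the collocation idea underlying the MPCM (illustrative only;
# the chapter's exact polynomial construction and weighting may differ).
# Assumptions: independent Gaussian uncertain parameters and a hypothetical
# black-box output function `system_output` invented for this example.
import itertools
import numpy as np

def system_output(params):
    # Hypothetical uncertain system mapping; replace with the actual
    # performance metric of interest (e.g., a cost or value evaluation).
    return np.sin(params[0]) + 0.5 * params[1] ** 2 + params[0] * params[1]

def collocation_mean(output_fn, means, stds, n_points=3):
    """Estimate E[output] over independent Gaussian uncertainties by
    evaluating the output only at Gauss-Hermite collocation points."""
    # Probabilists' Hermite quadrature nodes/weights for a standard normal.
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_points)
    weights = weights / weights.sum()  # normalize to a probability measure
    dims = len(means)
    est = 0.0
    # Full tensor-product grid: n_points**dims evaluations. This exponential
    # growth is the curse of dimensionality that the MPCM-OFFD targets by
    # evaluating only an orthogonal fractional subset of these points.
    for idx in itertools.product(range(n_points), repeat=dims):
        point = [means[d] + stds[d] * nodes[i] for d, i in enumerate(idx)]
        w = np.prod([weights[i] for i in idx])
        est += w * output_fn(point)
    return est

# Example usage with two uncertain parameters (placeholder statistics).
print(collocation_mean(system_output, means=[0.0, 1.0], stds=[1.0, 0.5]))
```

In an RL-based adaptive optimal control loop, an estimate of this kind would stand in for the expensive expectation over uncertain parameters at each policy evaluation step, which is what makes real-time computation of the game solutions feasible.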
