Abstract

Many challenging real-world control problems require adaptation and learning in the presence of uncertainty. Examples of these challenging domains include aircraft adaptive control under uncertain disturbances [1], [2], multiple-vehicle tracking with space-dependent uncertain dynamics [3], [4], robotic-arm control [5], blimp control [6], [7], mobile robot tracking and localization [8], [9], cart-pole systems and unicycle control [10], gait optimization in legged robots [11] and snake robots [12], and any other system whose dynamics are uncertain and for which limited data are available for model learning. Classical model reference adaptive control (MRAC) [13]-[15] and reinforcement learning (RL) methods [16]-[23] have been developed to address these challenges; they rely on parametric adaptive elements or control policies whose number of parameters or features is fixed and determined a priori. One example of such an adaptive model is the radial basis function network (RBFN), with RBF centers pre-allocated based on expected operating domains [24], [25]; a sketch of such a fixed-structure adaptive element follows.
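The following minimal sketch illustrates the kind of fixed-structure RBFN adaptive element described above: the centers are pre-allocated over an assumed operating domain and only the output weights are adapted online. All names and numerical values (centers, width `sigma`, gain `gamma`, the simplified gradient-style update) are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

class RBFNAdaptiveElement:
    """Fixed-structure RBFN adaptive element: centers chosen a priori, weights adapted online."""

    def __init__(self, centers, sigma, n_outputs, gamma=1.0):
        self.centers = np.asarray(centers)             # (n_rbf, n_states), pre-allocated, never moved
        self.sigma = sigma                             # common RBF width (assumed)
        self.W = np.zeros((len(self.centers), n_outputs))  # output weights, adapted online
        self.gamma = gamma                             # adaptation gain (assumed)

    def features(self, x):
        """Gaussian RBF features phi_i(x) = exp(-||x - c_i||^2 / (2 sigma^2))."""
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def output(self, x):
        """Adaptive-element output nu_ad(x) = W^T phi(x)."""
        return self.W.T @ self.features(x)

    def adapt(self, x, error, dt):
        """Simplified gradient-style weight update W <- W + gamma * phi(x) e^T dt
        (a stand-in for a full MRAC adaptive law)."""
        self.W += self.gamma * np.outer(self.features(x), error) * dt


# Usage: 11 centers gridded over an assumed 1-D operating range [-1, 1].
rbfn = RBFNAdaptiveElement(centers=np.linspace(-1, 1, 11).reshape(-1, 1),
                           sigma=0.2, n_outputs=1, gamma=5.0)
x = np.array([0.3])
nu_ad = rbfn.output(x)                       # model-error compensation term
rbfn.adapt(x, error=np.array([0.05]), dt=0.01)
```

Because the centers are fixed before operation, approximation quality degrades if the system leaves the pre-allocated domain, which is the limitation that motivates the nonparametric alternatives discussed in the paper.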
