Abstract

Designing an efficient learning-based model predictive control (MPC) framework for ducted-fan unmanned aerial vehicles (DFUAVs) is challenging owing to uncertain dynamics, coupled motion, and an unconventional aerodynamic configuration. Existing control techniques either rely on largely known physics-informed models or are tailored to specific objectives. To address this, this article proposes a compound learning-based MPC approach for DFUAVs that combines efficient dynamics learning with adequate disturbance rejection. First, a nominal model of the largely unknown DFUAV dynamics is obtained offline through sparse identification. Next, a reinforcement learning (RL) mechanism is deployed online to learn a policy that provides initial guesses for the control input sequence. An MPC optimization problem is then formulated in which the learned nominal model is updated with data from the real system, improving the computational efficiency of the overall control framework. Under appropriate assumptions, closed-loop stability and recursive feasibility are established. Finally, a comparative study illustrates the efficacy of the designed scheme.

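To make the described pipeline concrete, the following Python sketch shows one possible realization under stated assumptions: a SINDy-style sequentially thresholded least-squares (STLSQ) regression stands in for the offline sparse identification step, and a single-shooting MPC over the identified model is warm-started with a control sequence supplied by an RL policy. The candidate library, horizon, cost weights, and the rl_policy placeholder are illustrative assumptions, not the paper's implementation.

    # Minimal sketch of the three-stage pipeline: (1) offline sparse identification
    # via STLSQ, (2) an RL policy supplying a warm start, (3) shooting-based MPC
    # over the learned model. All numerical choices below are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    def library(x, u):
        """Candidate feature library Theta(x, u): constant, linear, and a few simple
        nonlinear terms (an assumption; the real library is model-specific)."""
        return np.concatenate(([1.0], x, u, x * x, np.outer(x, u).ravel()))

    def stlsq(Theta, dXdt, threshold=0.05, iters=10):
        """Sequentially thresholded least squares: sparse Xi with dXdt ~= Theta @ Xi."""
        Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
        for _ in range(iters):
            small = np.abs(Xi) < threshold
            Xi[small] = 0.0
            for k in range(dXdt.shape[1]):
                keep = ~small[:, k]
                if keep.any():
                    Xi[keep, k] = np.linalg.lstsq(Theta[:, keep], dXdt[:, k], rcond=None)[0]
        return Xi

    def predict(x, u, Xi, dt=0.02):
        """One Euler step of the identified nominal model: x_next = x + dt * Theta(x,u) Xi."""
        return x + dt * (library(x, u) @ Xi)

    def mpc_step(x0, x_ref, Xi, policy_guess, horizon=10, n_u=2):
        """Single-shooting MPC over the learned model, warm-started with the RL
        policy's control sequence (policy_guess, shape (horizon, n_u))."""
        def cost(u_flat):
            u_seq = u_flat.reshape(horizon, n_u)
            x, J = x0.copy(), 0.0
            for u in u_seq:
                x = predict(x, u, Xi)
                J += np.sum((x - x_ref) ** 2) + 1e-2 * np.sum(u ** 2)
            return J
        res = minimize(cost, policy_guess.ravel(), method="SLSQP",
                       options={"maxiter": 50})
        return res.x.reshape(horizon, n_u)[0]  # apply only the first input

    # Offline: stack rows library(x_i, u_i) into Theta, collect dXdt, then Xi = stlsq(Theta, dXdt).
    # Online:  u0 = mpc_step(x, x_ref, Xi, policy_guess=rl_policy(x))  # rl_policy is hypothetical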