Abstract

Taking the velocity regulation of autonomous driving as an example, this paper develops an enhanced active disturbance rejection design that employs the deep deterministic policy gradient (DDPG) algorithm from deep reinforcement learning. In this scheme, active disturbance rejection control (ADRC) is adopted to estimate and compensate for disturbances and uncertainties online, and feasible regions of the control parameters are obtained through the Lyapunov method. DDPG is then combined with ADRC to adaptively tune the control parameters online in response to changing environments, where safety, comfort, and energy saving are considered in the reward design, and a mapping from the defined states to actions is constructed to maximize the reward. Numerical simulations demonstrate that the enhanced design achieves better performance and stronger robustness in the presence of uncertainties, sensor noise, and mechanical faults, while comfort and energy consumption are also improved to some extent compared with conventional ADRC and model predictive control (MPC).
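To make the scheme concrete, the sketch below illustrates a first-order linear ADRC loop for velocity regulation, with the observer and controller bandwidths exposed as the kind of parameters a DDPG policy could tune online, and a reward that combines tracking error (safety), jerk (comfort), and control effort (energy). The plant model, gain values, and reward weights are illustrative assumptions for demonstration only, not values taken from the paper.

```python
import numpy as np

def simulate(omega_o=20.0, omega_c=5.0, b0=1.0, dt=0.01, T=10.0):
    """Track a step velocity reference with a 2nd-order ESO plus state-error feedback.

    omega_o : observer bandwidth (ESO poles placed at -omega_o)
    omega_c : controller bandwidth (closed-loop pole at -omega_c)
    These two bandwidths are the parameters a DDPG policy could adjust online.
    """
    beta1, beta2 = 2 * omega_o, omega_o ** 2   # ESO gains from (s + omega_o)^2
    kp = omega_c                                # proportional state-error gain

    v, z1, z2, u = 0.0, 0.0, 0.0, 0.0           # plant state, ESO states, control input
    v_ref = 15.0                                # velocity set-point in m/s (assumed)
    prev_a, reward = 0.0, 0.0

    for k in range(int(T / dt)):
        # Plant: dv/dt = b0*u + d(t), with d(t) an unknown lumped disturbance
        d = -0.5 * v - 2.0 * np.sin(0.5 * k * dt)   # drag + road-grade term (assumed)
        a = b0 * u + d
        v += a * dt

        # Extended state observer: z1 estimates v, z2 estimates the total disturbance
        e = z1 - v
        z1 += (z2 + b0 * u - beta1 * e) * dt
        z2 += (-beta2 * e) * dt

        # Control law: cancel the estimated disturbance, place the closed-loop pole
        u = (kp * (v_ref - z1) - z2) / b0

        # Reward terms analogous to the paper's criteria (weights are assumptions):
        # tracking error (safety), jerk (comfort), control effort (energy)
        jerk = (a - prev_a) / dt
        reward -= dt * (1.0 * (v_ref - v) ** 2 + 0.01 * jerk ** 2 + 0.001 * u ** 2)
        prev_a = a

    return v, reward

if __name__ == "__main__":
    v_final, r = simulate()
    print(f"final velocity: {v_final:.2f} m/s, episode reward: {r:.1f}")
```

In a full DDPG-ADRC setup, the agent would observe quantities such as the tracking error and its rate, output adjusted values of `omega_o` and `omega_c` within their Lyapunov-derived feasible regions, and be trained to maximize the accumulated reward above.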
