This work addresses the control problem in the landing phase of unmanned aerial vehicles (UAVs), with a particular focus on fixed-wing UAVs. The Proportional–Integral–Derivative (PID) controller is a widely used control method, but its parameters must be tuned to match the specific characteristics of the landing environment and to reject external disturbances. Neural networks, in contrast, can learn a mapping from flight states to control actions, enabling a more adaptive control strategy. In light of these considerations, a reinforcement-learning-based control system is proposed that is integrated with the conventional PID guidance law to achieve autonomous landing of fixed-wing UAVs, with the PID parameters tuned automatically by a Deep Q-learning Network (DQN). A traditional PID control system is constructed from a fixed-wing UAV dynamics model, and the flight state is discretized. The landing problem is formulated as a Markov Decision Process (MDP), and the reward function is designed from the landing conditions and the UAV's attitude. The state vectors are fed into the neural network, and the reinforcement learning algorithm outputs the optimized PID parameters. Training the network yields the optimal policy, which enables automatic parameter adjustment and optimization of the traditional PID control system. Furthermore, the efficacy of the control algorithms in realistic scenarios is validated in simulation by perturbing the UAV state vector and tracking an ideal glide curve. The results demonstrate that the DQN-modified controller converges markedly faster and exhibits better maneuverability than the unmodified traditional controller.
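To make the described structure concrete, below is a minimal Python sketch of how a DQN could tune PID gains inside such a loop. It is an illustration under stated assumptions, not the paper's published implementation: the class names (`PID`, `QNet`), the helper functions (`apply_action`, `reward`), the gain increments, the network sizes, and the reward weights are all hypothetical choices consistent with the abstract's description (discretized flight state, finite-action MDP, reward shaped by landing conditions and attitude).

```python
# Illustrative sketch only: names, gain increments, reward weights, and
# network sizes are assumptions, not the paper's published design.
import torch
import torch.nn as nn


class PID:
    """Conventional PID guidance law acting on the glide-path tracking error."""

    def __init__(self, kp, ki, kd, dt=0.02):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Discretized action set: each DQN action applies one increment per gain,
# which turns online gain tuning into a finite-action MDP.
GAIN_DELTAS = [(dk, di, dd)
               for dk in (-0.1, 0.0, 0.1)
               for di in (-0.01, 0.0, 0.01)
               for dd in (-0.05, 0.0, 0.05)]  # 3^3 = 27 discrete actions


def apply_action(pid, action_idx):
    """Adjust the PID gains by the increment triple chosen by the DQN."""
    dk, di, dd = GAIN_DELTAS[action_idx]
    pid.kp = max(0.0, pid.kp + dk)
    pid.ki = max(0.0, pid.ki + di)
    pid.kd = max(0.0, pid.kd + dd)


def reward(altitude_error, pitch, sink_rate, landed, crashed):
    """Reward shaped by landing conditions and UAV attitude.
    Weights and thresholds here are placeholders."""
    r = -abs(altitude_error) - 0.5 * abs(pitch)
    if landed:
        r += 100.0            # touchdown within the acceptable envelope
    if crashed or abs(sink_rate) > 5.0:
        r -= 100.0            # hard impact / envelope violation
    return r


class QNet(nn.Module):
    """Q-network mapping the discretized flight-state vector to action values;
    a standard DQN recipe (replay buffer, target network, epsilon-greedy
    exploration) would train it."""

    def __init__(self, state_dim, n_actions=len(GAIN_DELTAS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)
```

In each simulation step the flight-state vector would pass through `QNet`, the greedy (or epsilon-greedy) action index would select a gain increment via `apply_action` (e.g. `action = qnet(state).argmax(dim=1).item()`), and the updated PID would produce the guidance command; the reward then drives the usual DQN temporal-difference update.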