Abstract

Next-generation unmanned aerial vehicles (UAVs) should be able to autonomously perform multiple tasks with agility and efficiency in complex, obstacle-laden environments and over open terrain. Morphing UAVs can autonomously transform in response to changes in flight environment and task, maintaining an optimal aerodynamic profile at all times. A falcon-inspired morphing UAV simultaneously folds and twists its wings and tail to accomplish diving/pull-out flight, mimicking the falcon's predation maneuver. During diving/pull-out flight, falcon-inspired morphing UAVs need to balance maneuverability and stability, which is difficult to achieve with current control methods. This paper proposes a deep reinforcement learning (DRL)-based diving/pull-out cooperative control strategy. Because both the state space and the action space of morphing UAVs are continuous, the deep deterministic policy gradient (DDPG) algorithm, built on an actor-critic (AC) network, is adopted and refined. To ensure smooth flight actions, the proposed DRL-based strategy controls multiple data frames of airspeed, altitude, and pitch angle toward desired reference values. Numerical experiments were conducted on fixed-speed ascent/descent flight and diving/pull-out maneuver missions. The results demonstrate the superiority of the proposed DRL-based control strategy over a classical proportional-integral-derivative (PID) control strategy. Furthermore, the proposed DRL controller generalizes well to random white noise added to gyroscope measurements and to wind disturbances in flight.
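
The abstract names DDPG with an actor-critic network operating on multiple data frames of airspeed, altitude, and pitch angle. The following is a minimal sketch, assuming PyTorch, of what such a frame-stacked actor-critic pair could look like; the frame count, layer widths, and action dimension (e.g. an elevator deflection plus a morphing command) are illustrative assumptions, not the paper's implementation.

    # Minimal sketch of a frame-stacked actor-critic pair for DDPG-style
    # longitudinal control. Sizes and action semantics are assumptions.
    import collections
    import torch
    import torch.nn as nn

    N_FRAMES = 4          # assumed number of stacked data frames
    OBS_PER_FRAME = 3     # airspeed, altitude and pitch-angle tracking errors
    ACT_DIM = 2           # e.g. elevator deflection and a morphing command (assumed)

    class Actor(nn.Module):
        """Deterministic policy: stacked tracking errors -> bounded continuous action."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_FRAMES * OBS_PER_FRAME, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, ACT_DIM), nn.Tanh(),  # actions scaled to [-1, 1]
            )
        def forward(self, obs):
            return self.net(obs)

    class Critic(nn.Module):
        """Q(s, a): scores a state-action pair for the DDPG update."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_FRAMES * OBS_PER_FRAME + ACT_DIM, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )
        def forward(self, obs, act):
            return self.net(torch.cat([obs, act], dim=-1))

    # Frame stacking keeps recent history in the observation so the learned
    # policy can favour smooth, consistent actions across control steps.
    frames = collections.deque(maxlen=N_FRAMES)
    def build_observation(airspeed_err, altitude_err, pitch_err):
        frames.append(torch.tensor([airspeed_err, altitude_err, pitch_err]))
        while len(frames) < N_FRAMES:          # pad at the start of an episode
            frames.appendleft(frames[0].clone())
        return torch.cat(list(frames))

    actor, critic = Actor(), Critic()
    obs = build_observation(1.5, -10.0, 0.05)   # example tracking errors
    action = actor(obs)                          # deterministic control command
    q_value = critic(obs.unsqueeze(0), action.unsqueeze(0))

In a full DDPG setup, target copies of both networks, an experience replay buffer, and exploration noise on the action would complete the training loop; only the forward pass is sketched here.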
