Abstract

Air-ground integrated mobile cloud computing (MCC) gives unmanned aerial vehicles (UAVs) the capability to act as aerial relays with greater flexibility and resilience. In this cloud computing architecture, the data generated by ground users (GUs) can be offloaded to a remote server for fast processing. However, the heterogeneity of mobile tasks makes the data sizes distributed among GUs unbalanced. In addition, the energy efficiency of UAV movement must be carefully considered for sustainable flight and obstacle avoidance. In general, such a joint trajectory problem can hardly be formulated as a convex optimization in unpredictable and dynamic environments. This paper proposes a potential-game-combined multi-agent deep deterministic policy gradient (MADDPG) approach to optimize the trajectories of multiple UAVs, taking into account the GUs' offloading delay, energy efficiency, and obstacle avoidance. Specifically, we first model the problem as a mixed-integer non-linear problem (MINP), in which the service assignment between multiple users and multiple UAVs is solved by a potential game. Convergence to a Nash equilibrium (NE) is achieved by distributed service-assignment updates within a finite number of iterations. Then, each UAV's trajectory, with obstacle avoidance, is optimized by the MADDPG approach, whose centralized-training, decentralized-execution structure greatly reduces the overhead of globally synchronized communication. UAV movement can be optimized in a continuous action space, unlike other deep reinforcement learning (DRL) approaches that generate simple discrete actions. Experiments demonstrate that the proposed game-combined learning algorithm can minimize the offloading delay, enhance the UAVs' energy efficiency, and avoid obstacles at the same time.
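The distributed service-assignment update described above can be sketched as best-response dynamics in a congestion-style potential game, where each GU repeatedly switches to the UAV that minimizes its own offloading delay until no GU can improve unilaterally (a Nash equilibrium). This is a minimal illustrative sketch, not the paper's actual model: the delay expression, data sizes, and link rates used here are hypothetical placeholders.

```python
def best_response_assignment(num_gus, num_uavs, data_size, rate, max_iters=100):
    """Best-response dynamics for GU-to-UAV service assignment.

    Each GU greedily picks the UAV minimizing its own offloading delay,
    modeled here (as a placeholder) by:
        delay = (data queued at that UAV + own data) / own link rate.
    Because this is a potential game, the update terminates at a Nash
    equilibrium after finitely many improvement steps.
    """
    assign = [0] * num_gus                    # start with every GU on UAV 0
    for _ in range(max_iters):
        changed = False
        for g in range(num_gus):
            # Load each UAV would carry from the other GUs' data.
            load = [0.0] * num_uavs
            for h in range(num_gus):
                if h != g:
                    load[assign[h]] += data_size[h]
            delays = [(load[u] + data_size[g]) / rate[g][u]
                      for u in range(num_uavs)]
            best = min(range(num_uavs), key=lambda u: delays[u])
            if best != assign[g]:             # unilateral improvement exists
                assign[g] = best
                changed = True
        if not changed:                       # no GU can improve: NE reached
            return assign
    return assign
```

With four GUs of equal data size and uniform link rates to two UAVs, the dynamics balance the load, assigning two GUs to each UAV.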
