Abstract

Unmanned aerial vehicle (UAV)-assisted computation offloading allows mobile devices (MDs) to process computation-intensive and latency-sensitive tasks with limited or no available infrastructure. To achieve long-term performance in changing environments, deep reinforcement learning (DRL)-based methods have been applied to the UAV-assisted computation offloading problem. However, the deployment of multiple UAVs for computation offloading in mobile edge computing (MEC) networks still lacks a flexible learning scheme that can efficiently adjust the offloading policy to dynamic UAV mobility patterns and UAV failures. To this end, a distributed DRL-based method with cooperative exploration and prioritized experience replay (PER) is proposed in this paper. The distributed exploration process achieves a flexible learning scheme under UAV failure by allowing MDs to learn cost-efficient offloading policies cooperatively. Furthermore, PER allows MDs to preferentially replay transitions with high temporal-difference (TD) error, which improves performance under dynamic UAV mobility patterns. The efficiency of the proposed method is demonstrated through comparison with existing computation offloading methods; results show that it outperforms the compared methods in terms of convergence rate, energy-task efficiency, and average processing time.
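To illustrate the replay mechanism the abstract refers to, the following is a minimal sketch of a proportional PER buffer, where transitions with larger TD-error are sampled more often. This is an illustrative sketch only, not the paper's implementation; the class name `PrioritizedReplayBuffer` and the parameters `alpha` and `eps` are assumptions for demonstration.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional PER buffer (illustrative sketch).

    Transitions with larger absolute TD-error receive higher sampling
    probability, so the learner revisits surprising experiences more often.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity      # maximum number of stored transitions
        self.alpha = alpha            # how strongly priorities skew sampling
        self.eps = eps                # keeps every priority strictly positive
        self.buffer = []              # stored (s, a, r, s_next, done) tuples
        self.priorities = []          # one priority per stored transition
        self.pos = 0                  # next write position (ring buffer)

    def add(self, transition, td_error):
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices proportionally to priority.
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return idx, [self.buffer[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        # After a learning step, refresh priorities with the new TD-errors.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = (abs(err) + self.eps) ** self.alpha
```

In a distributed setting such as the one described, each MD would maintain (or share) such a buffer and update priorities after each local learning step.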
