Abstract

Due to their high maneuverability and flexibility, Unmanned Aerial Vehicles (UAVs) have become increasingly popular in Mobile Edge Computing (MEC), serving as edge platforms in infrastructure-unavailable scenarios, e.g., disaster rescue and field operations. Owing to their limited payload, UAVs are typically equipped with constrained computing and energy resources. Hence, it is crucial to design efficient edge computation offloading algorithms that achieve high edge computing performance while keeping energy consumption low. A variety of UAV-assisted computation offloading algorithms have been proposed, most of which schedule computation offloading in a centralized way and can become infeasible as the network size grows. To address this issue, we propose a semi-distributed computation offloading framework based on Multi-Agent Twin Delayed Deep Deterministic policy gradient (MATD3) to minimize the average system cost of the MEC network. We adopt the actor-critic reinforcement learning framework to learn an offloading decision model for each User Equipment (UE), so that each UE can make near-optimal computation offloading decisions on its own and is not affected by growth of the network size. Extensive experiments are carried out via numerical simulation, and the results verify the effectiveness of the proposed algorithm.

Keywords: Mobile edge computing, Deep reinforcement learning, Unmanned aerial vehicle, Computation offloading, Resource allocation
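To make the described approach concrete, below is a minimal sketch (not the authors' implementation) of one per-UE MATD3 agent in PyTorch, following the abstract's outline: each UE holds a local actor for offloading decisions, while twin centralized critics with clipped double-Q targets, target policy smoothing, and delayed actor updates stabilize training. All network sizes, hyperparameters, and tensor layouts (e.g., obs_dim, joint_act_dim, the action slice indices a_lo/a_hi) are illustrative assumptions, not taken from the paper.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    """Per-UE policy: maps the UE's local observation to an offloading action in [-1, 1]."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, act_dim), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Centralized critic: scores the joint observation and joint action of all UEs."""
    def __init__(self, joint_obs_dim, joint_act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(joint_obs_dim + joint_act_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

class MATD3Agent:
    """One UE's learner: local actor, twin critics, target networks, delayed updates (illustrative)."""
    def __init__(self, obs_dim, act_dim, joint_obs_dim, joint_act_dim,
                 gamma=0.99, tau=0.005, policy_delay=2, noise_std=0.2, noise_clip=0.5):
        self.actor = Actor(obs_dim, act_dim)
        self.q1 = Critic(joint_obs_dim, joint_act_dim)
        self.q2 = Critic(joint_obs_dim, joint_act_dim)            # twin critics
        self.actor_t = copy.deepcopy(self.actor)                  # target networks
        self.q1_t, self.q2_t = copy.deepcopy(self.q1), copy.deepcopy(self.q2)
        self.pi_opt = torch.optim.Adam(self.actor.parameters(), lr=1e-3)
        self.q_opt = torch.optim.Adam(list(self.q1.parameters()) + list(self.q2.parameters()), lr=1e-3)
        self.gamma, self.tau = gamma, tau
        self.policy_delay, self.noise_std, self.noise_clip = policy_delay, noise_std, noise_clip
        self.step = 0

    def update(self, joint_obs, joint_act, reward, next_joint_obs, done,
               next_joint_target_act, own_obs, a_lo, a_hi):
        # next_joint_target_act: concatenation of every agent's target-actor output,
        # assembled by the trainer; (a_lo, a_hi) is this UE's slice of the joint action.
        with torch.no_grad():
            # Target policy smoothing: add clipped noise to the target actions.
            noise = (torch.randn_like(next_joint_target_act) * self.noise_std
                     ).clamp(-self.noise_clip, self.noise_clip)
            next_act = (next_joint_target_act + noise).clamp(-1.0, 1.0)
            # Clipped double-Q target: take the minimum of the twin target critics.
            q_next = torch.min(self.q1_t(next_joint_obs, next_act),
                               self.q2_t(next_joint_obs, next_act))
            target = reward + self.gamma * (1.0 - done) * q_next
        q_loss = (F.mse_loss(self.q1(joint_obs, joint_act), target) +
                  F.mse_loss(self.q2(joint_obs, joint_act), target))
        self.q_opt.zero_grad(); q_loss.backward(); self.q_opt.step()

        self.step += 1
        if self.step % self.policy_delay == 0:
            # Delayed actor update: substitute this UE's action with the current
            # policy's output and maximize the first critic's value estimate.
            new_joint_act = torch.cat([joint_act[:, :a_lo], self.actor(own_obs),
                                       joint_act[:, a_hi:]], dim=-1)
            pi_loss = -self.q1(joint_obs, new_joint_act).mean()
            self.pi_opt.zero_grad(); pi_loss.backward(); self.pi_opt.step()
            # Soft (Polyak) updates of all target networks.
            for net, tgt in ((self.actor, self.actor_t), (self.q1, self.q1_t), (self.q2, self.q2_t)):
                for p, pt in zip(net.parameters(), tgt.parameters()):
                    pt.data.mul_(1.0 - self.tau).add_(self.tau * p.data)
```

In such a centralized-training, decentralized-execution setup, only the lightweight actor would need to run on each UE at decision time, which is consistent with the abstract's claim that each UE makes offloading decisions on its own regardless of network size; the centralized critics are used during training only.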
