Abstract

With the exponential growth of the Internet of Things (IoT), the increasing volume of data generated by IoT devices has become a significant challenge for communication and computing networks. Mobile Edge Computing (MEC) offers a viable solution by bringing computing, storage, and networking capabilities into close proximity to users, enabling computation-intensive, latency-sensitive applications to be hosted at the network's edge. In addition, their maneuverability, cost-effectiveness, and ease of deployment make Unmanned Aerial Vehicles (UAVs) highly versatile wireless platforms. Capitalizing on the benefits of MEC and UAVs, this paper proposes a UAV-assisted MEC system that offloads services to reduce the computation burden on IoT devices. The objective is to maximize the task completion rate by jointly optimizing the user transmit power, the task offloading rate, and the UAV trajectory. To tackle this optimization problem, the paper devises a method based on the Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm for continuous action spaces. Numerical simulations demonstrate the effectiveness of the proposed approach against two benchmarks: a deep Q-network (DQN) method and a full-offloading method. In the simulations, the proposed DDPG method achieves a 100% task success rate, while the DQN-based method reaches a lower task success rate of 95.93% and the full-offloading method only 82.73%.
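The abstract's key technical choice is DDPG: an actor outputs continuous decisions (e.g., an offloading rate), a critic learns the action value, and the actor is updated along the critic's gradient with respect to the action. The sketch below is a minimal NumPy illustration of this actor-critic coupling on a toy one-step problem, not the paper's system: the target value, reward shape, learning rates, and the linear-quadratic critic are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 0.7  # hypothetical optimal continuous action (e.g., best offloading rate)

def reward(a):
    # Toy stand-in for the task-completion objective: peaks at a = target
    return -(a - target) ** 2

# Critic: Q(a) = c0 + c1*a + c2*a^2, linear in the features [1, a, a^2]
c = np.zeros(3)
feats = lambda a: np.array([1.0, a, a * a])

# Deterministic actor: a single learnable action parameter w
w = 0.0
actor_lr, critic_lr = 0.05, 0.1

for step in range(2000):
    a = w + rng.normal(scale=0.3)              # exploration noise on the policy output
    r = reward(a)
    # Critic: least-mean-squares regression toward the observed return
    c += critic_lr * (r - feats(a) @ c) * feats(a)
    # Actor: deterministic policy gradient, ascend dQ/da evaluated at a = w
    dq_da = c[1] + 2 * c[2] * w
    w += actor_lr * dq_da

print(f"learned action {w:.2f}, optimum {target:.2f}")
```

The same pattern scales to the paper's setting by replacing the scalar action with the vector of transmit power, offloading rate, and trajectory variables, and the linear critic with a neural network; this continuous-action gradient step is what DQN, which must discretize the action space, cannot do.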
