Task offloading strategies for unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) systems have emerged as a promising solution for computationally intensive applications. However, the broadcast and open nature of radio transmissions makes such systems vulnerable to eavesdropping. Developing strategies that perform task offloading over a secure communication environment is therefore critical for both the security and the performance of MEC systems. In this paper, we first design an architecture that employs covert communication techniques so that a UAV-assisted MEC system can securely offload highly confidential tasks from the relevant user equipment (UE) and perform the corresponding computation. Then, modeling the problem as a Markov Decision Process (MDP) and incorporating the Prioritized Experience Replay (PER) mechanism into the Deep Deterministic Policy Gradient (DDPG) algorithm, we propose a PER-DDPG algorithm that jointly optimizes resource allocation, the movement of the UAV base station (UAV-BS), and the transmit power of the jammer to minimize both the maximum processing delay of the system and the warden's correct detection rate. Simulation results demonstrate the convergence and effectiveness of the proposed approach. Compared to baseline algorithms such as Deep Q-Network (DQN) and DDPG, the PER-DDPG algorithm achieves significant performance improvements, with an average reward increase of over 16% relative to DDPG and over 53% relative to DQN. Furthermore, PER-DDPG exhibits the fastest convergence speed among the three algorithms, highlighting its efficiency in jointly optimizing task offloading and communication security.
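The core idea behind the PER mechanism mentioned above is to replay transitions with large temporal-difference (TD) error more often than uniform sampling would. The following minimal sketch of proportional prioritized replay (class and parameter names are ours for illustration, not taken from the paper; the paper's own implementation may differ) shows how such a buffer would plug into a DDPG-style learner:

```python
import random

class PrioritizedReplayBuffer:
    """Illustrative proportional prioritized experience replay buffer."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha        # how strongly priorities shape sampling (0 = uniform)
        self.buffer = []          # stored transitions
        self.priorities = []      # one priority per stored transition

    def add(self, transition):
        # New transitions get the current max priority so they are sampled at least once.
        max_p = max(self.priorities, default=1.0)
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(max_p)

    def sample(self, batch_size, beta=0.4):
        # Sampling probability proportional to priority^alpha.
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        # Importance-sampling weights correct the bias from non-uniform sampling,
        # normalized by the max weight for stability.
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return idxs, [self.buffer[i] for i in idxs], weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # Larger TD error -> higher priority on subsequent sampling passes.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + eps
```

In a DDPG training loop, the critic's TD errors for each sampled batch would be fed back via `update_priorities`, and the importance-sampling weights would scale the critic loss per transition.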