Traditional task offloading methods struggle to cope with continuous changes in the network environment and device status, so a more intelligent solution is urgently needed. This article studies the optimization of task offloading in mobile edge computing (MEC) environments by combining improved deep Q-learning with transfer learning (TL). It first provides an overview of the basic concepts and optimization problems of task offloading in MEC environments, and then examines the application of improved deep Q-learning, namely the deep Q-network (DQN), to task offloading optimization. Next, this article designs the TL-DQN method by combining DQN and TL. This method uses the DQN algorithm to construct a model that can handle complex state and action spaces. Meanwhile, by introducing TL, the model can quickly transfer knowledge and adaptively optimize tasks in new environments, thereby improving the generalization ability and efficiency of the decision-making process. Experiments show that TL-DQN converges faster than the compared algorithms. For delay-sensitive users, the delay of the TL-DQN algorithm grows relatively slowly: when the task volume reached 10, the delay of TL-DQN was 7.97 s, which was 1.99 s lower than that of reinforcement learning. When the number of offloaded tasks increased to 500, the energy consumption of TL-DQN was 554 J, 282 J lower than that of the task offloading method built with reinforcement learning algorithms. The task offloading method studied in this article achieves significant results in reducing latency, improving energy efficiency, and adapting to dynamic network changes, providing users with a more stable and reliable service experience.
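The abstract's core idea, training a Q-learning model for offloading decisions and then transferring its learned parameters to bootstrap learning in a new environment, can be sketched as follows. This is a minimal illustrative toy, not the paper's actual TL-DQN: the state features, reward functions, and the use of a linear Q-approximator in place of a deep Q-network are all assumptions made for brevity.

```python
import numpy as np

# Hypothetical sketch of TL-DQN-style transfer: a tiny linear Q-approximator
# trained in a "source" MEC environment, whose weights then seed a "target"
# model so learning in the new environment starts from transferred knowledge
# rather than a random initialization.

STATE_DIM = 4    # assumed features: task size, channel gain, server load, battery
N_ACTIONS = 2    # 0 = execute locally, 1 = offload to the edge server

rng = np.random.default_rng(0)

class LinearQ:
    """Linear Q(s, a) = W s + b, the simplest stand-in for a DQN."""
    def __init__(self, w=None, b=None):
        self.w = rng.normal(0, 0.1, (N_ACTIONS, STATE_DIM)) if w is None else w.copy()
        self.b = np.zeros(N_ACTIONS) if b is None else b.copy()

    def q(self, s):
        return self.w @ s + self.b

    def update(self, s, a, target, lr=0.05):
        # One-step TD update of the chosen action toward the bootstrapped target.
        err = target - self.q(s)[a]
        self.w[a] += lr * err * s
        self.b[a] += lr * err

def train(model, reward_fn, episodes=200, gamma=0.9, eps=0.1):
    """Epsilon-greedy Q-learning on randomly sampled one-step transitions."""
    for _ in range(episodes):
        s = rng.random(STATE_DIM)
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(model.q(s)))
        r = reward_fn(s, a)
        s2 = rng.random(STATE_DIM)
        model.update(s, a, r + gamma * np.max(model.q(s2)))
    return model

# Source environment (assumed rewards): offloading is usually the better choice.
src = train(LinearQ(), lambda s, a: 1.0 if a == 1 else 0.2)
# Transfer step: initialize the target-environment model from the source weights.
tgt = LinearQ(w=src.w, b=src.b)
# Brief fine-tuning in a similar target environment suffices after transfer.
tgt = train(tgt, lambda s, a: 0.9 if a == 1 else 0.1, episodes=50)
```

The transfer step is the key design choice: because the target model starts from weights that already encode a sensible offloading policy, only a short fine-tuning phase is needed, which mirrors the faster convergence the abstract reports for TL-DQN.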