Collaborative cloud–edge computing has been systematically developed to balance the efficiency and cost of computing tasks for many emerging technologies. To improve the overall performance of cloud–edge systems, existing works have made progress in task scheduling by dynamically distributing tasks with different latency thresholds to edge and cloud nodes. However, the queueing interactions among tasks competing for multiple resources within a node remain understudied, leaving the potential benefit of optimizing multi-resource queueing unexplored. To fill this gap and improve the efficiency of cloud–edge systems, we propose DeepMIC, a deep reinforcement learning (DRL)-based multi-resource interleaving scheme for task scheduling. First, we formulate a multi-resource queueing model that minimizes the weighted-sum delay of the pending tasks. The model jointly considers each task's requests for computation, caching, and forwarding resources within a node, based on network information collected through Software-Defined Networking (SDN) and the management framework of Mobile Edge Computing (MEC). Then, we customize a DRL algorithm to solve the model in a timely manner, catering to high task throughput. Finally, we demonstrate that, through flexible task scheduling, DeepMIC reduces the average task response time and improves resource utilization.
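
The abstract does not give the model's notation; as a minimal sketch of a weighted-sum delay objective consistent with the description (all symbols below are illustrative assumptions, not the paper's), one could write: let $\mathcal{T}$ be the set of pending tasks at a node, $w_i$ the priority weight of task $i$, and $d_i$ its sojourn delay, decomposed over the three resource queues:

```latex
% Hypothetical notation (not taken from the paper):
% pi     = the scheduling policy being optimized
% d_i    = total delay of task i, summed over its waits in the
%          computation, caching, and forwarding queues of the node
\min_{\pi}\ \sum_{i \in \mathcal{T}} w_i\, d_i,
\qquad
d_i = d_i^{\text{comp}} + d_i^{\text{cache}} + d_i^{\text{fwd}}
```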
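The customized DRL algorithm itself is not specified in the abstract. As a rough illustration of how a value-based scheduler for this setting might be structured, the sketch below trains a small Q-network whose state is the node's three queue backlogs plus per-task weighted waits, and whose action picks which pending task to interleave next. Every name, dimension, and the stand-in environment are assumptions for illustration, not DeepMIC's actual design.

```python
# Minimal sketch of a DRL task-interleaving scheduler in the spirit of
# DeepMIC (hypothetical MDP design; the paper's state/action/reward
# definitions are not reproduced here).
import random
import torch
import torch.nn as nn

N_TASKS = 8               # pending tasks visible to the scheduler (assumed)
STATE_DIM = 3 + N_TASKS   # 3 queue backlogs + per-task weighted waits (assumed)

class QNet(nn.Module):
    """Small MLP mapping the node's queue state to per-task Q-values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_TASKS),
        )

    def forward(self, s):
        return self.net(s)

def fake_env_step(state, action):
    """Stand-in environment: dispatching a task slightly drains the
    backlogs, and the reward is the negative weighted wait of the
    chosen task. A real deployment would measure these quantities
    from the SDN/MEC-collected network state."""
    next_state = torch.clamp(state - 0.01, min=0.0)
    reward = -state[3 + action].item()
    return next_state, reward

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.1
state = torch.rand(STATE_DIM)

for step in range(1000):
    # Epsilon-greedy selection over the pending tasks.
    if random.random() < eps:
        action = random.randrange(N_TASKS)
    else:
        action = int(qnet(state).argmax())
    next_state, reward = fake_env_step(state, action)
    # One-step TD target (no replay buffer or target network,
    # to keep the sketch minimal).
    with torch.no_grad():
        target = reward + gamma * qnet(next_state).max()
    loss = (qnet(state)[action] - target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = next_state
```

A production scheduler would add the usual DQN machinery (experience replay, a target network) and drive the state from live queue measurements rather than a synthetic transition function.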