Abstract

Collaborative cloud–edge computing has been systematically developed to balance the efficiency and cost of computing tasks for many emerging technologies. To improve the overall performance of cloud–edge systems, existing works have made progress in task scheduling by dynamically distributing tasks with different latency thresholds to edge and cloud nodes. However, the multi-resource queueing relationships among the tasks within a node have not been well studied, leaving the benefits of optimizing multi-resource queueing unexplored. To fill this gap and improve the efficiency of cloud–edge systems, we propose DeepMIC, a deep reinforcement learning (DRL)-based multi-resource interleaving scheme for task scheduling in cloud–edge systems. First, we formulate a multi-resource queueing model that minimizes the weighted-sum delay of the pending tasks. The proposed model jointly considers a task's requests for computation, caching, and forwarding resources within a node, based on network information collected through Software-Defined Networking (SDN) and the management framework of Mobile Edge Computing (MEC). Then, we customize a DRL algorithm to solve the model in a timely manner, catering to the high throughput of tasks. Finally, we demonstrate that, through flexible task scheduling, DeepMIC reduces the average task response time and achieves better resource utilization.
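To make the objective concrete, the weighted-sum delay can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Task` fields, the per-resource queue names, and the example numbers are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

# Hypothetical illustration (not DeepMIC's actual data model): a node
# holds separate queues for its computation, caching, and forwarding
# resources; each pending task carries an estimated delay and a weight.

@dataclass
class Task:
    delay: float   # estimated queueing-plus-service delay of the task
    weight: float  # priority weight assigned to the task

def weighted_sum_delay(queues: dict[str, list[Task]]) -> float:
    """Objective to minimize: the sum of weight * delay over every
    pending task across all per-resource queues of the node."""
    return sum(t.weight * t.delay
               for tasks in queues.values()
               for t in tasks)

# Example state with illustrative (made-up) delays and weights:
queues = {
    "compute": [Task(delay=0.8, weight=2.0), Task(delay=0.5, weight=1.0)],
    "cache":   [Task(delay=0.2, weight=1.0)],
    "forward": [Task(delay=0.3, weight=1.5)],
}
print(weighted_sum_delay(queues))  # 2.75
```

A scheduler that interleaves requests across the three resource types would pick actions (e.g., reordering or rerouting a pending task) that lower this quantity; the DRL agent described in the abstract learns such a policy from the collected network state.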
