Abstract

Computational offloading systems, where computational tasks can be processed locally or offloaded to a remote cloud, have become prevalent since the advent of cloud computing. The task scheduler in a computational offloading system decides both which tasks to offload to the remote cloud and how to schedule the remaining tasks on the local processors. In this work, we consider the problem of minimizing a weighted sum of the makespan of the tasks and the offloading cost at the remote cloud. In contrast to prior works, we do not assume that the task processing times are known a priori. We show that the original problem can be solved by algorithms designed to minimize the maximum of the makespan and the weighted offloading cost, at the expense of only a doubling of the competitive ratio. Furthermore, when the remote cloud is much faster than the local processors, the latter problem can be equivalently transformed into a makespan minimization problem with unrelated processors. For this case, we propose a Greedy-One-Restart (GOR) algorithm based on online estimation of the unknown processing times, together with one-time cancellation and rescheduling of tasks that turn out to require long processing times. Given $m$ local processors, we show that GOR has an $O(\sqrt{m})$ competitive ratio, a substantial improvement over the best known algorithms in the literature. For the general case of arbitrary remote-cloud speed, we extend GOR to a Greedy-Two-Restart (GTR) algorithm and show that it is $O(\sqrt{m})$-competitive. Furthermore, for the case where tasks arrive dynamically at unknown times, we extend GOR and GTR to Dynamic-GOR (DGOR) and Dynamic-GTR (DGTR), respectively, and derive their competitive ratios. Finally, we discuss how GOR can be extended to accommodate multiple remote processors. Beyond the performance bounds given by the competitive ratios, our simulation results demonstrate that the proposed algorithms are also favorable in terms of average performance, in comparison with the well-known list scheduling algorithm and other alternatives.
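The full specification of GOR appears in the body of the paper; as a rough illustration of the one-time cancellation and offloading idea mentioned above, the following Python sketch schedules tasks greedily on the least-loaded local processor and, when a task exceeds a running-time budget, cancels it once and offloads it to the remote cloud. The function name, the `threshold` and `cloud_speedup` parameters, and the sequential cloud model are illustrative assumptions for this toy model only, not the paper's actual GOR algorithm or its analysis.

```python
import heapq


def greedy_one_restart_sketch(tasks, m, threshold, cloud_speedup):
    """Toy greedy scheduler with a one-time restart rule (illustrative only).

    tasks: list of true processing times, unknown to the scheduler until a
           task finishes or is cancelled.
    m: number of identical local processors.
    threshold: per-task local running-time budget; a task exceeding it is
               cancelled once and offloaded to the remote cloud.
    cloud_speedup: factor by which the remote cloud is faster than a local
                   processor (assumed, for illustration).
    Returns (makespan, offloading_cost), where the offloading cost here is
    simply the total remote processing time.
    """
    # Local processors modeled as a min-heap of (finish_time, processor_id).
    local = [(0.0, i) for i in range(m)]
    heapq.heapify(local)
    cloud_finish = 0.0
    offloading_cost = 0.0

    for p in tasks:
        finish, proc = heapq.heappop(local)
        if p <= threshold:
            # Task completes locally within the budget.
            heapq.heappush(local, (finish + p, proc))
        else:
            # Run locally up to the budget, then cancel once and offload;
            # the partial local work is wasted and the task restarts remotely.
            heapq.heappush(local, (finish + threshold, proc))
            remote_time = p / cloud_speedup
            # Simplifying assumption: the cloud processes offloaded tasks
            # one at a time, starting no earlier than the cancellation time.
            cloud_finish = max(cloud_finish, finish + threshold) + remote_time
            offloading_cost += remote_time

    makespan = max(cloud_finish, max(t for t, _ in local))
    return makespan, offloading_cost


if __name__ == "__main__":
    # Processing times are revealed only by running the tasks.
    times = [3.0, 8.0, 1.5, 20.0, 2.0, 6.0]
    print(greedy_one_restart_sketch(times, m=2, threshold=5.0, cloud_speedup=4.0))
```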
