Abstract

Fog computing has been proposed to reduce the latency of cloud computing, and scheduling is one of its most common challenges: suitable scheduling increases the efficiency of a fog system. Researchers have presented a wide range of scheduling methods, each with its own strengths and weaknesses. This paper presents a novel scheduling method for fog computing based on Divisible Load Theory (DLT). The proposed method offers significant benefits for arbitrarily divisible loads and for large, data-intensive workloads. The method is first modeled; then, based on DLT, closed-form solutions are derived for both linear and nonlinear cost models. These closed forms are solved, and from the solutions an innovative algorithm is proposed for partitioning an arbitrarily divisible load and distributing its fractions among the nodes of a fog environment. The performance of the proposed method is compared with existing algorithms. Simulation results show that the DLT-based method improves performance by a factor of at least seven: it reduces the finish time (by about eight times for linear and eighty-five times for nonlinear loads) and increases the speedup (by about seven times for linear and one hundred thirty times for nonlinear loads).
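
The abstract does not reproduce the closed forms themselves. For intuition, the sketch below implements the classic single-level-tree (star) DLT partitioning under a linear cost model, where the optimal split makes all nodes finish at the same instant. This is a minimal illustration of the general DLT technique, not the paper's exact fog algorithm; the function names and the per-unit cost parameters (w, z, t_cp, t_cm) and the sequential-distribution assumption are all illustrative choices.

    # Minimal sketch of classic star-network DLT partitioning (linear model).
    # Assumption: a master distributes fractions to workers sequentially, and
    # each worker starts computing as soon as its fraction has arrived.
    # Optimality then requires all workers to finish simultaneously, giving
    #   alpha_{i+1} = alpha_i * (w_i * T_cp) / (z_{i+1} * T_cm + w_{i+1} * T_cp).

    def dlt_fractions(w, z, t_cp=1.0, t_cm=1.0):
        """Load fractions alpha_i for n workers on a star network.

        w[i]: time for node i to process one unit of load (inverse speed)
        z[i]: time to transmit one unit of load to node i
        """
        ratios = [1.0]
        for i in range(len(w) - 1):
            ratios.append(ratios[-1] * (w[i] * t_cp)
                          / (z[i + 1] * t_cm + w[i + 1] * t_cp))
        total = sum(ratios)                 # normalize so fractions sum to 1
        return [r / total for r in ratios]

    def finish_time(alpha, w, z, t_cp=1.0, t_cm=1.0):
        """Makespan: cumulative link time plus each node's own compute time."""
        sent, t = 0.0, 0.0
        for a, wi, zi in zip(alpha, w, z):
            sent += a * zi * t_cm           # the shared link is used in turn
            t = max(t, sent + a * wi * t_cp)
        return t

    if __name__ == "__main__":
        w = [1.0, 2.0, 4.0]                 # slower nodes have larger w
        z = [0.2, 0.2, 0.2]
        alpha = dlt_fractions(w, z)
        print(alpha, finish_time(alpha, w, z))

With these fractions every node's finish time coincides, which is the equal-finish-time principle that the closed-form DLT solutions formalize; the paper extends this idea to nonlinear cost models as well.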
