Abstract
As a new paradigm of computing, cloud computing has brought about a major computational shift. Among the many technologies involved, task scheduling has long been studied as a core problem by both industry and academia. Existing research mainly targets completion time or load balancing; however, as cluster sizes expand, energy consumption becomes a problem that must be addressed. In this paper, a maximum loss scheduling algorithm is proposed for the first time. It is a low-power algorithm that can greatly reduce the energy consumption of cloud computing clusters through a loss comparison rule, and its effect becomes more pronounced as the cluster size and the number of tasks increase. Simulation results show that the proposed method significantly outperforms the Max–Min, Min–Min, Sufferage, and E-HEFT algorithms: compared with Min–Min, Max–Min, Sufferage, and E-HEFT, it reduces the average completion time by 16%, 12%, 8%, and 14%, respectively. Its load balancing is also better than that of the Min–Min and Sufferage algorithms.
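The abstract names a loss comparison rule but does not define it. As a rough illustration only, the sketch below shows a Sufferage-style greedy scheduler in which, each round, the task whose "loss" (the gap between its best and second-best completion time across machines) is largest is committed to its best machine first. All names (`schedule_by_max_loss`, `task_lengths`, `machine_speeds`) are hypothetical and the rule shown is an assumption, not the paper's actual algorithm.

```python
def schedule_by_max_loss(task_lengths, machine_speeds):
    """Greedy assignment of tasks to machines by a maximum-loss rule (illustrative only).

    task_lengths   -- list of task workloads (e.g., million instructions)
    machine_speeds -- list of machine speeds (work units per second)
    Returns (assignment, ready_time): assignment[i] is the machine chosen
    for task i, ready_time[m] is the finish time of machine m.
    """
    ready_time = [0.0] * len(machine_speeds)   # when each machine becomes free
    assignment = [None] * len(task_lengths)
    unscheduled = set(range(len(task_lengths)))

    while unscheduled:
        best_task, best_machine, best_loss = None, None, -1.0
        for t in unscheduled:
            # Completion time of task t on every machine.
            finish = [ready_time[m] + task_lengths[t] / machine_speeds[m]
                      for m in range(len(machine_speeds))]
            order = sorted(range(len(finish)), key=lambda m: finish[m])
            # Loss = how much worse the second-best machine would be for this task.
            loss = (finish[order[1]] - finish[order[0]]
                    if len(order) > 1 else finish[order[0]])
            if loss > best_loss:
                best_task, best_machine, best_loss = t, order[0], loss
        # Commit the task with the largest loss to its best machine.
        assignment[best_task] = best_machine
        ready_time[best_machine] += task_lengths[best_task] / machine_speeds[best_machine]
        unscheduled.remove(best_task)

    return assignment, ready_time

if __name__ == "__main__":
    tasks = [40, 25, 60, 10, 35]    # illustrative workloads
    machines = [1.0, 2.0, 1.5]      # illustrative speeds
    plan, finish_times = schedule_by_max_loss(tasks, machines)
    print("assignment:", plan)
    print("makespan:", max(finish_times))
```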