Abstract

In order to realize high-performance information systems, server cluster systems are widely used. Since application processes are performed on multiple servers, a server cluster consumes a large amount of electric energy. The delay time-based (DTB) algorithm selects a server for each request process so that the total energy consumed by a server cluster to perform application processes can be reduced. However, in the DTB algorithm, if the average interarrival time of request processes is shorter than the minimum computation time of each process, the average response time of each process increases; that is, the computation resources of a server cluster cannot be used efficiently. In this paper, we propose an improved DTB (IDTB) algorithm that reduces the total energy consumption of a server cluster and uses its computation resources more efficiently, even if the average interarrival time of request processes is shorter than the minimum computation time of each process. We evaluate the IDTB algorithm in comparison with the basic round-robin (RR), improved power consumption laxity-based (IPCLB), and DTB algorithms.
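The abstract does not give the DTB algorithm's actual energy model, so the following is only a minimal sketch of the general idea of energy-aware server selection: estimate, for each server, the additional energy a new request process would cost, and dispatch to the server with the smallest estimate. All names, fields, and the linear power-times-time energy model are assumptions for illustration, not the paper's formulas.

```python
# Hypothetical sketch of DTB-style energy-aware server selection.
# The power model below (energy = power rate x computation time) is a
# simplifying assumption; the paper's actual model is not shown here.

def select_server(servers, new_process_time):
    """Return the index of the server with the smallest estimated
    energy to finish its queued work plus the new request process.

    servers: list of dicts with assumed fields:
      'power' - active power consumption rate of the server [W]
      'queue' - total remaining computation time of queued processes [s]
    new_process_time: expected computation time of the new process [s]
    """
    def estimated_energy(s):
        # Energy to drain the current queue plus the new process,
        # all charged at this server's active power rate.
        return s['power'] * (s['queue'] + new_process_time)

    return min(range(len(servers)),
               key=lambda i: estimated_energy(servers[i]))


servers = [
    {'power': 200.0, 'queue': 4.0},  # faster but power-hungry server
    {'power': 120.0, 'queue': 6.0},  # slower, lower-power server
]
idx = select_server(servers, new_process_time=2.0)
# Server 1 is chosen: 120 * (6 + 2) = 960 J < 200 * (4 + 2) = 1200 J
```

Under this toy model, the lower-power server wins despite its longer queue, which illustrates why such a policy can degrade response time when requests arrive faster than they complete, the situation the IDTB algorithm is said to address.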
