Abstract

In dynamic power management (DPM), it is important to switch from a high-power-consumption state to a low-power-consumption one at a suitable time. In the literature, this problem has been formulated as a Markov decision process (MDP) with state-dependent control. However, this approach is often infeasible in practice, because a state-dependent policy requires that every state of the request arrival process be observed through online monitoring. To overcome this problem, we develop a simple time-out policy for DPM, which can be regarded as the optimal timing for taking a GO-SLEEP action during an idle period of the transaction system. We analytically derive the optimal time-out policy minimizing the expected power consumption per unit time in the steady state, under the assumption that the request arrival process is a Markovian arrival process (MAP) with an arbitrary number of phases. In numerical experiments with real read/write data for a hard disk unit and CPU utilization data, we estimate the expected power consumption per unit time under two DPM policies, the time-out policy and the MDP-based policy, and quantitatively compare their effectiveness in reducing power consumption.
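To illustrate the kind of policy the abstract describes, the following is a minimal simulation sketch, not the paper's analytical method. It assumes Poisson request arrivals (the simplest one-phase special case of the MAP considered in the paper), ignores request service times, and uses purely hypothetical power and energy parameters (`P_IDLE`, `P_SLEEP`, `E_WAKEUP`, `arrival_rate`). It estimates the expected power consumption per unit time for a given time-out value and sweeps candidate time-outs to pick the cheapest one.

```python
import random

# Hypothetical parameters; all values are illustrative assumptions,
# not taken from the paper.
P_IDLE = 2.0      # power in the high-power (idle) state [W]
P_SLEEP = 0.1     # power in the low-power (sleep) state [W]
E_WAKEUP = 5.0    # energy penalty paid when waking up from sleep [J]

def simulate_power(timeout, arrival_rate=0.05, horizon=1e6, seed=0):
    """Estimate the expected power per unit time of a time-out policy.

    Requests arrive as a Poisson process with rate `arrival_rate`.
    After each request the device idles; if the gap until the next
    request exceeds `timeout`, the device takes the GO-SLEEP action,
    sleeps until the request arrives, and pays the wake-up penalty.
    """
    rng = random.Random(seed)
    t, energy = 0.0, 0.0
    while t < horizon:
        gap = rng.expovariate(arrival_rate)  # idle period until next request
        if gap <= timeout:
            energy += P_IDLE * gap                # stayed idle the whole gap
        else:
            energy += P_IDLE * timeout            # idle until the time-out fires
            energy += P_SLEEP * (gap - timeout)   # then sleep
            energy += E_WAKEUP                    # wake-up penalty on arrival
        t += gap
    return energy / t  # steady-state estimate of power per unit time

# Sweep candidate time-outs and pick the one with the lowest average power.
best = min((simulate_power(tau), tau) for tau in [0.0, 1.0, 5.0, 10.0, 20.0, 50.0])
print(f"best time-out = {best[1]:.1f} s, avg power = {best[0]:.3f} W")
```

The paper, by contrast, derives the optimal time-out analytically for a general multi-phase MAP; the simulation above merely shows why an intermediate time-out can beat both the never-sleep (`timeout` very large) and sleep-immediately (`timeout = 0`) extremes when the wake-up penalty is nonzero.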

