Abstract

The annual electricity consumed by data transfers in the U.S. is estimated at 20 TWh, which translates to around 4 billion U.S. dollars per year. There has been a considerable amount of prior work on power management and energy efficiency in hardware and software systems, and more recently in power-aware networking. Despite the growing body of research on power management techniques for the networking infrastructure, there has been no prior work (to the best of our knowledge) focusing on saving energy at the end-systems (sender and receiver nodes) during data transfers. We argue that although network-only approaches are an important part of the solution, end-system power management is another key to optimizing the energy efficiency of data transfers, one that has long been ignored. In this paper, we analyze various factors that affect the power consumption of end-to-end data transfers, such as the level of parallelism, concurrency, and pipelining, as well as the CPU frequency level at the end-systems. Our results show that significant energy savings (up to 60%) can be achieved at the end-systems during data transfer with no or minimal performance penalty if the correct parameter combination is used.
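The trade-off the abstract describes can be made concrete with a simple energy model: the energy an end-system spends on a transfer is the power it draws while active multiplied by the transfer duration, so a lower CPU frequency saves energy whenever the drop in power outweighs the drop in throughput. The sketch below is only an illustration of that arithmetic; the function name and all power/throughput figures are hypothetical and are not taken from the paper.

```python
def transfer_energy_joules(size_gb, throughput_gbps, power_watts):
    """Energy an end-system spends on one transfer.

    size_gb:         transfer size in gigabytes
    throughput_gbps: achieved throughput in gigabits per second
    power_watts:     average power draw during the transfer
    """
    duration_s = size_gb * 8 / throughput_gbps  # convert GB to Gb, then divide by rate
    return power_watts * duration_s

# Illustrative (made-up) numbers: a 10 GB transfer at full CPU frequency
# vs. a reduced frequency that costs 10% throughput but draws 40% less power.
e_full = transfer_energy_joules(10, 10, 100)  # 8.0 s at 100 W -> 800 J
e_low = transfer_energy_joules(10, 9, 60)     # ~8.9 s at 60 W -> ~533 J
savings = 1 - e_low / e_full                  # ~33% energy saved for a 10% slowdown
```

Under these assumed numbers, a modest performance penalty yields a disproportionate energy saving, which is the effect the paper exploits by searching for the right combination of frequency and transfer parameters.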
