Abstract

Power consumption is the main obstacle to performance scalability of cloud datacenters, since it is currently difficult and economically infeasible to cool datacenter facilities that consume more than roughly 50 megawatts (MW). Given the high energy consumption of general-purpose multicore servers, there is an urgent need to equip datacenter servers with energy-efficient domain-specific cores or accelerators so that cloud datacenter performance can scale under a constant power envelope. For this to happen, middleware such as Apache Spark has to be modified to seamlessly distribute tasks across heterogeneous multicore server nodes and domain-specific accelerators. In this work, we propose a minimalistic set of extensions to the Spark middleware runtime for seamless integration of domain-specific accelerators. We present an end-to-end system design and programming model for mapping computational tasks sent from the driver program to accelerators on worker nodes. We discuss how our system design and programming model fit popular big-data analytics kernels, such as logistic regression and k-means, and we show significant improvements in both performance and energy efficiency over clusters of multicore servers. Finally, we discuss some of the opportunities and challenges that our research community must tackle before accelerator-augmented servers can be deployed in production datacenters.
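Although the abstract does not spell out the proposed API, a minimal Scala sketch of the kind of driver-side offload hook it describes might look as follows. All names here are hypothetical placeholders, not the paper's actual interface: accMap, the kernel id "logreg_gradient", and the ACC_PRESENT probe are assumptions, and the accelerator dispatch path is stubbed out with a CPU fallback.

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.SparkSession

    import scala.reflect.ClassTag

    object AcceleratorOffloadSketch {

      // Hypothetical offload hook: tags a map-style computation with a kernel
      // id so a worker-side runtime could dispatch each partition to a local
      // accelerator. In this sketch the accelerator path is a stub and every
      // partition takes the CPU fallback.
      def accMap[T, U: ClassTag](rdd: RDD[T], kernelId: String)
                                (cpuFallback: T => U): RDD[U] =
        rdd.mapPartitions { iter =>
          // A real runtime would probe the worker node for an accelerator
          // implementing kernelId; this env-var check is a stand-in.
          val acceleratorAvailable = sys.env.contains("ACC_PRESENT")
          if (acceleratorAvailable) {
            // Marshalling the partition to the accelerator and streaming the
            // results back is elided; fall through to the CPU path here.
            iter.map(cpuFallback)
          } else {
            iter.map(cpuFallback) // software fallback on the multicore CPU
          }
        }

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("accelerator-offload-sketch")
          .master("local[*]")
          .getOrCreate()

        // Toy (features, label) pairs for one logistic-regression gradient
        // step, the kind of kernel the abstract mentions.
        val points = spark.sparkContext.parallelize(Seq(
          (Array(1.0, 2.0), 1.0),
          (Array(0.5, -1.0), 0.0)
        ))
        val w = Array(0.1, 0.1) // current model weights, captured by the closure

        val grads = accMap(points, kernelId = "logreg_gradient") {
          case (x, y) =>
            val margin = x.zip(w).map { case (xi, wi) => xi * wi }.sum
            val scale = 1.0 / (1.0 + math.exp(-margin)) - y
            x.map(_ * scale) // per-point gradient contribution
        }

        val grad = grads.reduce((a, b) => a.zip(b).map { case (p, q) => p + q })
        println(grad.mkString("grad = [", ", ", "]"))
        spark.stop()
      }
    }

Keeping a per-partition CPU fallback is one plausible way such extensions stay "minimalistic" and seamless, as the abstract claims: tasks remain schedulable on any worker node, whether or not it hosts an accelerator.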
