Abstract
Cloud computing holds great promise for High Performance Computing (HPC) users and applications. Accordingly, a large amount of work has gone into exploring and enabling the use of current cloud service architectures to support HPC applications. While the allure of cloud-based HPC systems is compelling, a number of issues still prevent the cloud from becoming a truly viable HPC platform. In particular, application performance has been found to suffer from competing workloads, randomized layouts and node assignments, and competing network flows. All of these issues arise from the fact that HPC applications are forced to share and compete for resources alongside a wide variety of other commodity applications. Unfortunately, the presence of these competing workloads is critical to the success of the cloud model, which relies on the economics of leveraging shared resources. While this inherent tension has so far acted as a barrier to the deployment of HPC applications in the cloud, we claim that it can be overcome using multiple specialized system software stacks capable of providing isolated partitions for co-located workloads. This talk will focus on the design and development of system software capable of effectively supporting HPC applications in a commodity cloud environment through the use of dynamic resource partitioning and isolated management layers.
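As a rough illustration of the partitioning idea, the sketch below pins two co-located workloads to disjoint CPU sets on a single Linux host. This is only a coarse user-space analogue of the isolated system software stacks and management layers described above, not the talk's actual mechanism; the core set sizes and workload binaries are hypothetical.

```python
import os
import subprocess

# Assumed: a Linux host with at least 8 cores. The core assignments and
# workload binaries below are placeholders for illustration only.
HPC_CORES = {0, 1, 2, 3}        # cores reserved for the HPC job
COMMODITY_CORES = {4, 5, 6, 7}  # cores left to commodity workloads


def launch_pinned(cmd, cores):
    """Start `cmd` restricted to the given CPU set.

    The affinity is set in the child before exec, so any processes the
    workload spawns inherit the same partition of cores.
    """
    return subprocess.Popen(
        cmd,
        preexec_fn=lambda: os.sched_setaffinity(0, cores),  # Linux-only
    )


if __name__ == "__main__":
    hpc_job = launch_pinned(["./mpi_app"], HPC_CORES)        # hypothetical HPC binary
    web_job = launch_pinned(["./web_server"], COMMODITY_CORES)  # hypothetical commodity workload
    hpc_job.wait()
    web_job.wait()
```

A real deployment of the approach described in the talk goes much further than CPU affinity: each partition runs under its own specialized system software stack, with memory, interconnect, and management services isolated as well, so that the HPC workload is shielded from the noise introduced by co-located commodity applications.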