Abstract
High Energy Physics (HEP) experiments will enter a new era with the start of the HL-LHC program, with computing needs surpassing the current capacities by large factors. Anticipating such a scenario, funding agencies from participating countries are encouraging the experimental collaborations to consider the rapidly developing international High Performance Computing (HPC) infrastructures as a way to satisfy at least a fraction of the foreseen HEP processing demands. These HPC systems are highly non-standard facilities, custom-built for use cases largely different from HEP workloads, which consist of processing particle collision events (real or simulated) that can be analyzed individually, without correlations among them. Access to and utilization of these systems by HEP experiments will not be trivial, given the diversity of configurations and access requirements among HPC centers, which increases the level of complexity from the HEP experiment integration and operations perspectives. Additionally, while HEP data resides on a distributed, highly interconnected storage infrastructure, HPC systems are in general not meant to access large data volumes residing outside the facility. Finally, the allocation policies for these resources generally differ from the current usage model of pledged resources deployed at supporting Grid sites. This report covers the CMS strategy developed to make effective use of HPC resources, involving a closer collaboration between CMS and HPC centers in order to further understand and subsequently overcome the present obstacles. Progress in the necessary technical and operational adaptations being made in CMS computing is described.
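The contrast drawn above between HEP workloads and typical HPC use cases rests on the fact that collision events are statistically independent: each event can be processed on its own, with no inter-process communication. The following minimal sketch (hypothetical code, not CMS software; process_event is a stand-in for per-event reconstruction) illustrates this embarrassingly parallel, high-throughput pattern.

```python
# Minimal sketch of why HEP processing is embarrassingly parallel:
# each (real or simulated) collision event is handled independently,
# so the work can be farmed out to any number of workers or nodes.
# All names here are hypothetical stand-ins, not CMS code.
from multiprocessing import Pool

def process_event(event_id: int) -> dict:
    """Stand-in for per-event reconstruction; it needs no data from
    any other event, hence no inter-process communication."""
    return {"event": event_id, "status": "reconstructed"}

if __name__ == "__main__":
    event_ids = range(1000)   # one entry per collision event
    with Pool() as pool:      # the pool size is irrelevant to correctness
        results = pool.map(process_event, event_ids)
    print(f"processed {len(results)} independent events")
```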
Highlights
The CMS strategy for approaching High Performance Computing (HPC) centers is based on national and local CMS teams doing the handshaking with HPC representatives from their respective nations or regions.
A sizable fraction of the total computing power contributed by HPCs is expected to come from processor types other than the standard Intel x86_64 CPUs that CMS has successfully exploited at the Grid sites supporting the experiment (a build-selection sketch follows this list).
The CMS team in charge of operations and workload planning aims at a transparent integration of HPC resources, so that tasks can be assigned interchangeably to either High Throughput Computing (HTC) or HPC sites (a matchmaking sketch follows this list).
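As a concrete illustration of the heterogeneity point above, the snippet below sketches how a job wrapper might select a software build matching the local CPU architecture on a non-x86_64 HPC node. This is a hypothetical sketch: the BUILDS mapping and its labels mimic an architecture-tagged naming style but do not claim to reflect the actual CMSSW deployment layout.

```python
import platform

# Illustrative mapping only: build labels are placeholders, not a claim
# about the actual set of CMS production builds.
BUILDS = {
    "x86_64": "el8_amd64_gcc11",
    "aarch64": "el8_aarch64_gcc11",
    "ppc64le": "el8_ppc64le_gcc11",
}

arch = platform.machine()  # e.g. 'x86_64' on a standard Grid worker node
build = BUILDS.get(arch)
if build is None:
    raise SystemExit(f"no software build available for architecture {arch!r}")
print(f"selected build {build} for local architecture {arch}")
```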
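The transparent-integration goal can likewise be pictured as a matchmaking step in the workload-management system: a task runs wherever its requirements are met, regardless of whether the site is an HTC (Grid) or HPC facility. The sketch below is hypothetical and greatly simplified; the attribute names (arches, external_net, needs_remote_data) are assumptions for illustration and do not describe CMS's actual infrastructure.

```python
# Hedged sketch of architecture- and connectivity-aware matchmaking;
# all site descriptions and attribute names are hypothetical.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    kind: str            # "HTC" (Grid) or "HPC"
    arches: set          # CPU architectures available at the site
    external_net: bool   # can jobs reach storage outside the facility?

@dataclass
class Task:
    name: str
    arch: str            # architecture the payload was built for
    needs_remote_data: bool

def eligible_sites(task: Task, sites: list) -> list:
    """Transparent assignment: a task may land on an HTC or HPC site,
    provided the site offers the required architecture and, if the task
    must read data from outside the facility, external connectivity."""
    return [s for s in sites
            if task.arch in s.arches
            and (s.external_net or not task.needs_remote_data)]

sites = [
    Site("Grid-T2", "HTC", {"x86_64"}, True),
    Site("HPC-A", "HPC", {"x86_64", "aarch64"}, False),
]
# Simulation is self-contained, so an HPC site without external
# connectivity is acceptable as long as the architecture matches.
sim = Task("simulation", "aarch64", needs_remote_data=False)
print([s.name for s in eligible_sites(sim, sites)])  # -> ['HPC-A']
```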
Summary
High Energy Physics (HEP) experiments, such as CMS, aim to increase their usage of High Performance Computing (HPC) resources to help cover the expected growth in computing needs in the mid- to long-term future (the upcoming LHC Runs and the HL-LHC) [1], while coping with the projected continuation of current funding levels. In the current international landscape of ever larger scientific projects and bigger scientific computing installations, growing funds are being committed to HPC centers, whose managers are working towards exascale infrastructures (see, for example, [2] and [3]). The LHC experiments have taken notice of such trends, which present an opportunity to help cover their growing computing demands while gaining access to the best technologies available on the market, usually deployed at HPC sites. The following sections describe the necessary technical and operational adaptations in CMS computing for optimal HPC resource exploitation.