Abstract
The Worldwide LHC Computing Grid (WLCG) today comprises a range of different types of resources, such as cloud centers, large and small HPC centers, and volunteer computing, as well as the traditional grid resources. The Nordic Tier 1 (NT1) is a WLCG computing infrastructure distributed over the Nordic countries. The NT1 deploys the Nordugrid ARC-CE, which is non-intrusive and lightweight, originally developed to cater for HPC centers where no middleware could be installed on the worker nodes. The NT1 runs ARC in the native Nordugrid mode, which, contrary to the pilot mode, leaves job data transfers to ARC. ARC's data transfer capabilities, together with the ARC Cache, are its most important features. In this article we describe the data staging and cache functionality of the ARC-CE set up as an edge service to an HPC or cloud resource, and show the gain in efficiency this model provides compared to a traditional pilot model, especially for sites with remote storage.
Highlights
The Nordugrid Advanced Resource Connector (ARC) [1, 2] middleware was originally developed to allow Nordic compute and storage facilities to contribute to the large compute needs of the LHC experiments, which for the Nordic countries primarily meant the ATLAS [3] experiment. The Nordic sites, with their heterogeneous systems, wide-area network inaccessibility from the worker nodes, and separated compute and storage, required special middleware which at the time was not available.
The Worldwide LHC Computing Grid (WLCG) today comprises a range of different types of resources, such as cloud centers, large and small HPC centers, and volunteer computing, as well as the traditional grid resources.
The Nordic Tier 1 (NT1) deploys the Nordugrid ARC-CE, which is non-intrusive and lightweight, originally developed to cater for HPC centers where no middleware could be installed on the worker nodes.
Summary
The Nordugrid Advanced Resource Connector (ARC) [1, 2] middleware was originally developed to allow Nordic compute and storage facilities to contribute to the large compute needs of the LHC experiments, which for the Nordic countries primarily meant the ATLAS [3] experiment. As part of the migration plan, ARC was first run in the true-pilot mode, and later in the native mode. This provided a good opportunity to measure the jobs' CPU efficiency in the two modes and compare them to each other, in addition to comparing the CPU efficiencies to those of the other NT1 (HPC) sites. The longer it takes before a job's computation starts, due to e.g. downloading of input data, the longer the walltime will be and the lower the CPU efficiency will be, for the same CPU-time.
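The relation between staging time and CPU efficiency can be made concrete with a minimal sketch. CPU efficiency is CPU-time divided by walltime, so any time a job spends downloading input data before computation starts inflates the walltime and depresses the efficiency. The numbers below are purely illustrative assumptions, not measurements from the article:

```python
def cpu_efficiency(cpu_time_s: float, walltime_s: float) -> float:
    """CPU efficiency: fraction of the job's walltime spent computing."""
    return cpu_time_s / walltime_s

# Hypothetical job consuming 10 hours of CPU-time.
cpu_time = 10 * 3600

# Pilot-style job: input data is downloaded on the worker node,
# adding (here, assumed) 1 hour of staging to the walltime.
pilot_walltime = cpu_time + 1 * 3600

# Native ARC job: data is pre-staged by the ARC-CE (possibly served
# from the ARC Cache), so the walltime is close to the CPU-time.
native_walltime = cpu_time

print(f"pilot mode:  {cpu_efficiency(cpu_time, pilot_walltime):.2%}")
print(f"native mode: {cpu_efficiency(cpu_time, native_walltime):.2%}")
```

The same CPU-time yields roughly 91% efficiency in the pilot scenario versus 100% in the native one, which is the effect the article quantifies across the NT1 sites.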