Abstract

Computing resource needs in HEP are expected to increase drastically in the coming years: the ATLAS and CMS experiments foresee an increase by a factor of 5-10 in the volume of recorded data. The current infrastructure, the Worldwide LHC Computing Grid (WLCG), is not sufficient to meet these demands in terms of computing and storage resources. Using non-HEP-specific resources is one way to reduce this shortage. However, doing so comes at a cost: first, with several such resources at hand, access becomes increasingly difficult for the individual user, since each resource typically requires its own authentication and has its own access mechanism. Second, as these resources are not designed for HEP workflows, they may lack dedicated software or other necessary services. Allocating resources at the different providers can be handled by COBalD/TARDIS, developed at KIT. This resource manager integrates resources on demand into one overlay batch system, providing the user with a single point of entry. The software and services needed for the community's workflows are served transparently through containers. With this approach, an HPC cluster at RWTH Aachen University is dynamically and transparently integrated into a Tier 2 WLCG resource, virtually doubling its computing capacity.
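The balancing of opportunistic resources can be pictured as a feedback loop over a pool of dynamically provisioned workers: the pool reports how well its resources are used, and the demand for further resources is raised or lowered accordingly. The following Python snippet is a minimal conceptual sketch of that idea under these assumptions; the class and function names are illustrative and do not reproduce the actual COBalD API.

```python
# Conceptual sketch of the opportunistic balancing idea (not the actual COBalD API):
# a resource pool reports how well its resources are used, and a controller
# raises or lowers the demand for new resources accordingly.

from dataclasses import dataclass


@dataclass
class OpportunisticPool:
    """A pool of dynamically provisioned resources (e.g. HPC worker nodes)."""
    demand: float = 0.0       # resources the pool should try to acquire
    supply: float = 0.0       # resources currently provided to the batch system
    utilisation: float = 0.0  # fraction of supplied resources doing useful work
    allocation: float = 0.0   # fraction of supplied resources assigned to jobs


def adjust_demand(pool: OpportunisticPool, step: float = 1.0,
                  low: float = 0.5, high: float = 0.9) -> None:
    """Simple feedback rule: grow while resources are well used, shrink otherwise."""
    if pool.utilisation >= high:
        pool.demand += step                         # resources busy: request more
    elif pool.allocation <= low:
        pool.demand = max(0.0, pool.demand - step)  # resources idle: release some


# Example: a well-utilised pool triggers a request for additional resources.
pool = OpportunisticPool(demand=4, supply=4, utilisation=0.95, allocation=1.0)
adjust_demand(pool)
print(pool.demand)  # -> 5.0
```

In this simplified picture, the feedback rule plays the role of a controller that continuously matches the amount of requested external resources to the actual workload.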

Highlights

  • Several HEP experiments are provided with computing resources through the Worldwide LHC Computing Grid (WLCG)

  • In order to automate the integration of external resources, KIT develops “COBalD – the Opportunistic Balancing Daemon” [2] (COBalD) and the “Transparent Adaptive Resource Dynamic Integration System” [3] (TARDIS)

  • Together, they allow multiple different resource types to be integrated into one overlay batch system, which acts as a single point of entry for the users


Summary

Usage of non-community specific resources

Several HEP experiments are provided with computing resources through the Worldwide LHC Computing Grid (WLCG). The 13 Tier 1 centers, hosted at participating research facilities around the globe, each mirror parts of the raw and reconstructed data, such that every file exists at least twice. They perform further reprocessing of the data and provide the experiments with compute resources for data analysis and MC (Monte Carlo) event simulation. The advantage of non-community specific resources is that the group does not have to administer them; however, as they are often not designed for the community, missing software or services have to be provided through other means. Both the ATLAS and the CMS experiments at the LHC expect to record 5-10 times more data in the HL-LHC era by 2026 than they do today. The management of these external resources is handled by COBalD/TARDIS.
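A dynamically integrated worker node goes through a lifecycle: it is requested at the external provider, boots and joins the overlay batch system, runs payloads, and is eventually drained and released again. The sketch below illustrates such a lifecycle as a simple state machine; the state names and transitions are hypothetical and chosen for illustration only, not taken from the TARDIS implementation.

```python
# Illustrative state machine for a dynamically integrated worker ("drone").
# The state names are hypothetical; they only illustrate the lifecycle idea of
# requesting, integrating, draining and removing opportunistic resources.

from enum import Enum, auto


class DroneState(Enum):
    REQUESTED = auto()   # resource requested at the site (e.g. as an HPC batch job)
    BOOTING = auto()     # resource starting up, container environment prepared
    INTEGRATED = auto()  # worker joined the overlay batch system, runs payloads
    DRAINING = auto()    # no new payloads accepted, running jobs finish
    REMOVED = auto()     # resource released back to the provider


# Allowed transitions between states.
TRANSITIONS = {
    DroneState.REQUESTED: {DroneState.BOOTING, DroneState.REMOVED},
    DroneState.BOOTING: {DroneState.INTEGRATED, DroneState.REMOVED},
    DroneState.INTEGRATED: {DroneState.DRAINING},
    DroneState.DRAINING: {DroneState.REMOVED},
    DroneState.REMOVED: set(),
}


def advance(current: DroneState, target: DroneState) -> DroneState:
    """Move a drone to a new state if the transition is allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target


# Example: the typical life of a drone that is integrated and later released.
state = DroneState.REQUESTED
for nxt in (DroneState.BOOTING, DroneState.INTEGRATED,
            DroneState.DRAINING, DroneState.REMOVED):
    state = advance(state, nxt)
print(state.name)  # -> REMOVED
```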
