Abstract

The distributed computing infrastructure of the ATLAS Experiment comprises over 170 sites and executes up to 3 million computing jobs daily. PanDA (Production and Distributed Analysis) is the Workload Management System responsible for task and job execution; its key components, the broker and the job scheduler, define the mapping of computing jobs to computing resources. Optimizing this mapping is crucial for handling the computational payloads expected during the HL-LHC era. Given the heterogeneous and distributed structure of the Worldwide LHC Computing Grid (WLCG) infrastructure that provides the computing resources for analyzing the data, dedicated approaches are needed to evaluate computing resources according to their ability to process different types of workflows. Such an evaluation can improve the efficiency of the Grid by optimally distributing different types of payloads across heterogeneous computing environments. To tackle this challenge, this research proposes a method for evaluating WLCG resources with respect to their ability to process user analysis payloads. The evaluation leverages available information about job execution on PanDA queues within the ATLAS computing environment.
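The abstract does not specify the evaluation method in detail. As a minimal illustrative sketch, the snippet below assumes hypothetical per-job records (queue name, outcome, CPU time, wall time) and aggregates two simple per-queue indicators, success rate and CPU efficiency, of the kind that could be derived from PanDA job-execution information. All queue names, field layouts, and numbers are fabricated for illustration and are not taken from the paper.

```python
# Illustrative sketch only: assumes hypothetical job records; not the
# paper's actual evaluation procedure.
from collections import defaultdict

# Fabricated sample data: (queue, finished_ok, cpu_time_s, wall_time_s)
jobs = [
    ("SITE_A_ANALY", True, 3200, 4000),
    ("SITE_A_ANALY", True, 2800, 3500),
    ("SITE_A_ANALY", False, 100, 900),
    ("SITE_B_ANALY", True, 5000, 5200),
    ("SITE_B_ANALY", True, 4700, 5000),
]

def queue_metrics(job_records):
    """Aggregate per-queue success rate and CPU efficiency."""
    acc = defaultdict(lambda: {"n": 0, "ok": 0, "cpu": 0.0, "wall": 0.0})
    for queue, ok, cpu, wall in job_records:
        a = acc[queue]
        a["n"] += 1          # total jobs seen on this queue
        a["ok"] += int(ok)   # successfully finished jobs
        a["cpu"] += cpu      # accumulated CPU time
        a["wall"] += wall    # accumulated wall-clock time
    return {
        q: {
            "success_rate": a["ok"] / a["n"],
            "cpu_efficiency": a["cpu"] / a["wall"],
        }
        for q, a in acc.items()
    }

metrics = queue_metrics(jobs)
```

In practice such indicators could be computed per payload type (e.g. user analysis vs. production) to compare how well heterogeneous queues handle each workflow class, which is the kind of comparison the proposed evaluation targets.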
