Abstract
The distributed computing infrastructure of the ATLAS Experiment spans more than 170 sites and executes up to 3 million computing jobs daily. PanDA (Production and Distributed Analysis) is the Workload Management System responsible for task and job execution; its key components, the broker and the job scheduler, define the mapping of computing jobs to resources. Optimizing this mapping is crucial for handling the computational payloads expected during the HL-LHC era. Given the heterogeneity and distributed structure of the Worldwide LHC Computing Grid (WLCG) infrastructure, which provides the computing resources for analyzing the data, dedicated approaches are needed to evaluate computing resources according to their ability to process different types of workflows. Such an evaluation can enhance the efficiency of the Grid by distributing different types of payloads optimally across heterogeneous computing environments. To tackle this challenge, this research proposes a method for evaluating WLCG resources with respect to their ability to process user analysis payloads. The evaluation leverages available information about job execution on PanDA queues within the ATLAS computing environment.