Abstract
The ATLAS experiment at the LHC is gradually transitioning from the traditional file-based processing model to dynamic workflow management at the event level with the ATLAS Event Service (AES). The AES assigns fine-grained processing jobs to workers and streams out the data in quasi-real time, ensuring fully efficient utilization of all resources, including the most volatile. The next major step in this evolution is the ability to intelligently stream the input data itself to workers. The Event Streaming Service (ESS) is now in development to asynchronously deliver only the input data required for processing when it is needed, protecting the application payload from WAN latency without creating expensive long-term replicas. In the current prototype implementation, ESS processes run on compute nodes in parallel to the payload, reading the input event ranges remotely over the network and replicating them in small input files that are passed to the application. In this contribution, we present the performance of the ESS prototype for different types of workflows in comparison to tasks accessing remote data directly. Based on the experience gained with the current prototype, we are now moving to the development of a server-side component of the ESS. The service can evolve progressively into a powerful Content Delivery Network-like capability for data streaming, ultimately enabling the delivery of ‘virtual data’ generated on demand.
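The client-side prefetching idea described above can be pictured with a minimal sketch: given an event range assigned by the Event Service, fetch only those events from remote storage and replicate them into a small local file that the payload opens as ordinary input. The names used here (EventRange, read_event_range, prefetch) are illustrative assumptions, not actual ESS or Athena interfaces, and the remote-read step is left as a labeled placeholder.

```python
# Illustrative sketch of client-side event-range prefetching (not ESS code).

import os
from dataclasses import dataclass


@dataclass
class EventRange:
    """One unit of work assigned by the Event Service (hypothetical layout)."""
    range_id: str      # identifier of the event range
    input_file: str    # remote URL / logical name of the input file
    first_event: int   # first event in the range (inclusive)
    last_event: int    # last event in the range (inclusive)


def read_event_range(remote_url: str, first: int, last: int) -> bytes:
    """Placeholder for the remote read: stream only the requested events
    over the network instead of copying the whole input file."""
    raise NotImplementedError("remote streaming protocol goes here")


def prefetch(event_range: EventRange, work_dir: str) -> str:
    """Replicate one event range into a small local input file that the
    payload application can then open without seeing any WAN latency."""
    data = read_event_range(event_range.input_file,
                            event_range.first_event,
                            event_range.last_event)
    local_path = os.path.join(work_dir, f"{event_range.range_id}.pool.root")
    with open(local_path, "wb") as f:
        f.write(data)
    return local_path
```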
Highlights
The ATLAS experiment [1] at the LHC [2] has accumulated more than 400 Petabytes of data processed on a globally distributed network of computing centers capable of providing about 6M CPU-hours/day
After setting up the Prefetcher and payload application, the pilot starts to retrieve from PanDA/JEDI the event ranges assigned by the AES for processing; unlike conventional ATLAS Event Service jobs, the pilot passes the event ranges to the Prefetcher first (see the sketch after these highlights)
We have developed a working Event Streaming Service (ESS) prototype based on client-side prefetching, which has been useful to improve the support for remote data access in the ATLAS production system
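The ordering highlighted above (event ranges from PanDA/JEDI go through the Prefetcher before reaching the payload) can be expressed as a simple control loop. This is a hypothetical sketch, not actual PanDA pilot code; all function names are assumptions standing in for the real pilot, Prefetcher, and payload interfaces.

```python
# Illustrative pilot loop (not PanDA pilot code): the only point it shows is
# that each event range is localized by the Prefetcher before the payload runs.

from typing import Iterator


def get_event_ranges_from_panda(job_id: str) -> Iterator[dict]:
    """Placeholder: ask PanDA/JEDI for the next event ranges of this job."""
    raise NotImplementedError


def prefetcher_localize(event_range: dict) -> str:
    """Placeholder: Prefetcher streams the range from remote storage and
    returns the path of the small local input file it produced."""
    raise NotImplementedError


def payload_process(local_input: str, event_range: dict) -> str:
    """Placeholder: run the payload on the localized range, return the output path."""
    raise NotImplementedError


def report_output(event_range: dict, output_path: str) -> None:
    """Placeholder: upload the output and mark the range as finished."""
    raise NotImplementedError


def run_pilot(job_id: str) -> None:
    for event_range in get_event_ranges_from_panda(job_id):
        local_input = prefetcher_localize(event_range)  # Prefetcher first
        output_path = payload_process(local_input, event_range)
        report_output(event_range, output_path)
```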
Summary
The ATLAS experiment [1] at the LHC [2] has accumulated more than 400 Petabytes of data processed on a globally distributed network of computing centers capable of providing about 6M CPU-hours/day. These needs are expected to grow by more than one order of magnitude with the increase in data size and complexity foreseen after the High Luminosity upgrade of the LHC around 2026, making resource constraints much more limiting. Opportunistic resources such as High Performance Computing centers, commercial clouds, volunteer computing, and shared grid resources are a prime target for expanding the computing pool available to ATLAS. The AES takes advantage of the flexible workflow management of PanDA and its JEDI component [4] to implement a processing model in which workflows can be dynamically managed at high granularity, assigning individual events to worker processes and streaming out the output data almost continuously (every 10-30 minutes) to a remote object store. In this way, the AES can fill resources efficiently without having to fine-tune job duration to resource lifetime, and if the worker is terminated prematurely (for example, in case of preemption), the amount of work lost is minimized. The Event Service is currently in production for Monte Carlo simulation on HPC, cloud, and grid computing platforms, and is undergoing commissioning on a wider set of resources [5], [6]
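The near-continuous output streaming mentioned in the summary bounds the work lost on preemption to roughly one streaming interval. The toy sketch below illustrates that idea only; it is not ATLAS code, and the 15-minute interval is simply a value inside the 10-30 minute window quoted above.

```python
# Toy illustration of interval-based output streaming (not ATLAS code):
# completed per-event-range outputs are shipped to the object store on a
# short fixed cycle, so a preempted worker loses at most one cycle of work.

import time
from typing import Callable, List

STREAM_INTERVAL_S = 15 * 60  # within the 10-30 minute window cited above


def stream_outputs_periodically(pending_outputs: List[str],
                                upload: Callable[[str], None],
                                stop: Callable[[], bool]) -> None:
    """Every STREAM_INTERVAL_S seconds, upload whatever outputs have
    accumulated since the last flush, until stop() returns True."""
    last_flush = time.monotonic()
    while not stop():
        if time.monotonic() - last_flush >= STREAM_INTERVAL_S:
            while pending_outputs:
                upload(pending_outputs.pop(0))  # e.g. PUT to the object store
            last_flush = time.monotonic()
        time.sleep(1.0)
```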