Abstract

The ATLAS experiment at CERN’s LHC stores detector and simulation data in raw and derived data formats across more than 150 Grid sites worldwide, currently totalling about 200 PB on disk and 250 PB on tape. Data have different access characteristics due to the various computational workflows, and can be accessed from different media, such as via remote I/O, from a disk cache on hard disk drives, or from SSDs. In addition, the larger data centers provide the majority of offline storage capability via tape systems. For the High-Luminosity LHC (HL-LHC), the estimated data storage requirements are several times larger than the present forecast of available resources, based on a flat-budget assumption. On the computing side, ATLAS Distributed Computing has been very successful in recent years in integrating high-performance and high-throughput computing and in using opportunistic computing resources for Monte Carlo simulation. On the other hand, equivalent opportunistic storage does not exist. ATLAS started the Data Carousel project to increase the usage of less expensive storage, i.e. tape or even commercial storage, so it is not limited exclusively to tape technologies. Data Carousel orchestrates data processing between workload management, data management, and storage services, with the bulk data resident on offline storage. The processing is executed by staging and promptly processing a sliding window of inputs onto faster buffer storage, such that only a small percentage of the input data is available at any one time. With this project, we aim to demonstrate that this is the natural way to dramatically reduce our storage costs. The first phase of the project started in the fall of 2018 and focused on I/O tests of the sites’ archiving systems. Phase II now requires a tight integration of the workload and data management systems. Additionally, the Data Carousel project studies the feasibility of running multiple computing workflows from tape. The project is progressing very well, and the results presented in this document will be used before LHC Run 3.

Highlights

  • By Data Carousel, we mean an orchestration between workflow management (WFM), distributed data management (DDM/Rucio [5]) and tape services whereby a bulk production campaign with its inputs resident on tape is executed by staging and promptly processing a sliding window of inputs on a disk buffer, such that only a small fraction of the inputs is pinned on disk at any one time (see the sliding-window sketch after this list).

  • ProdSys2 creates subscription rules in Rucio, which submits the staging requests to the File Transfer Service (FTS); a hedged sketch of such a rule request follows this list.

  • Instead of chasing a moving target, we currently focus on the tape staging process itself and try to improve the tape recall efficiency of our workflow, defined as the ratio of the throughput delivered to end users to the vendor-specified nominal throughput of the tape system (a small helper computing this ratio is sketched after this list).
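
To make the sliding-window idea in the first highlight concrete, here is a minimal, self-contained Python sketch of a carousel loop: it keeps at most `window_size` tape-resident datasets staged on the disk buffer, releases each one after processing, and immediately stages the next. The `stage`, `process` and `release` callbacks and the dataset names are hypothetical placeholders, not the actual ProdSys2, Rucio or PanDA interfaces.

```python
from collections import deque

# Hypothetical sliding-window carousel: only `window_size` tape-resident
# datasets occupy the disk buffer at any one time.
def run_carousel(tape_datasets, window_size, stage, process, release):
    """stage/process/release are callables supplied by the orchestrator;
    here they stand in for staging rules, processing jobs and rule deletion."""
    pending = deque(tape_datasets)
    staged = deque()

    # Fill the initial window from tape.
    while pending and len(staged) < window_size:
        ds = pending.popleft()
        stage(ds)
        staged.append(ds)

    # Process the window; as each dataset finishes, free its buffer space
    # and immediately stage the next one, keeping the window "sliding".
    while staged:
        ds = staged.popleft()
        process(ds)
        release(ds)  # free disk buffer space
        if pending:
            nxt = pending.popleft()
            stage(nxt)
            staged.append(nxt)


if __name__ == "__main__":
    # Toy run with print-based callbacks and made-up dataset names.
    datasets = [f"data18_13TeV.RAW.{i:04d}" for i in range(6)]
    run_carousel(
        datasets,
        window_size=2,
        stage=lambda d: print("staging   ", d),
        process=lambda d: print("processing", d),
        release=lambda d: print("releasing ", d),
    )
```

In the real system this orchestration is carried out between ProdSys2, Rucio and the tape services rather than by a single in-process loop.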
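
The second highlight describes ProdSys2 creating rules in Rucio, which then hands the transfer requests to FTS. The snippet below is a hedged sketch of what such a staging request could look like through the Rucio Python client's `add_replication_rule` call; the scope, dataset and RSE names, the lifetime, and the `activity` label are illustrative assumptions, and keyword names may differ between Rucio versions.

```python
from rucio.client import Client

# Hedged sketch of a staging request, roughly what ProdSys2 asks Rucio for;
# Rucio then submits the resulting transfers to FTS. Requires a configured
# Rucio client environment (rucio.cfg and a valid authentication token).
def request_staging(scope, dataset, buffer_rse, lifetime_days=14):
    client = Client()
    rule_ids = client.add_replication_rule(
        dids=[{"scope": scope, "name": dataset}],   # dataset to pull from tape
        copies=1,                                   # one disk copy for processing
        rse_expression=buffer_rse,                  # destination disk buffer (e.g. a DATADISK)
        lifetime=lifetime_days * 24 * 3600,         # seconds; rule expires after the processing window
        activity="Staging",                         # assumed activity label for tape recalls
    )
    return rule_ids


# Example call with made-up identifiers:
# request_staging("data18_13TeV", "data18_13TeV.periodB.physics_Main.RAW",
#                 "SOMESITE_DATADISK")
```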
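
The last highlight defines tape recall efficiency as the ratio of the throughput delivered to end users to the vendor-specified nominal throughput; a trivial helper makes the definition explicit (the example numbers are made up).

```python
# Tape recall efficiency: throughput delivered to end users divided by the
# vendor-specified nominal throughput of the tape system.
def recall_efficiency(delivered_gb_per_s: float, nominal_gb_per_s: float) -> float:
    if nominal_gb_per_s <= 0:
        raise ValueError("nominal throughput must be positive")
    return delivered_gb_per_s / nominal_gb_per_s


# Made-up example: 1.2 GB/s delivered against a 4.0 GB/s nominal rate -> 0.30
print(f"{recall_efficiency(1.2, 4.0):.2f}")
```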

Summary

Introduction

The basic considerations for the ATLAS experiment [3] to address this storage challenge at the HL-LHC are:

  • “Opportunistic storage” does not exist for LHC experiments;

  • Format size reduction and data compression are both long-term goals, and these will require significant effort from the software and distributed computing teams;

  • The increased usage of cold, less expensive storage (currently tape) relative to disk is a natural way to dramatically reduce our storage costs.

Similar ideas have been explored and successfully implemented in other scientific communities, such as the experiments at the Relativistic Heavy Ion Collider (RHIC) [4]. ATLAS started the Data Carousel R&D project in June 2018 to study the feasibility of getting inputs directly from tape for various ATLAS workflows, such as derivation production and RAW data reprocessing.

Data Carousel
ATLAS staging process
Objectives
Three phases
Phase I: results and lessons learned
Phase II
Integrate tape into ATLAS workflow
Site staging profile
Smart writing for efficient reading
Release of tasks and jobs
Future plans