Abstract
The CERN IT Storage group operates multiple distributed storage systems to support all CERN data storage requirements. The storage and distribution of physics data generated by LHC and non-LHC experiments is one of the biggest challenges the group has to take on during LHC Run-3. EOS [1], the CERN distributed disk storage system, is playing a key role in LHC data-taking. During the first ten months of 2022, more than 440 PB have been written by the experiments and 2.9 EB have been read out. The data storage requirements of LHC Run-3 are higher than what was previously delivered, and the storage operations team has started investigating multiple areas in which to upgrade and optimize the current storage resources. A new, dedicated and redundant EOS infrastructure based on 100 Gbit servers was installed, commissioned and deployed for the ALICE Online and Offline (O2) project. This cluster can sustain high-throughput data transfer between the ALICE Event Processing Nodes (EPN) and CERN's data center. This paper will present the architecture, techniques and workflows in place that allow EOS to deliver fast, reliable and scalable data storage to meet experiment needs during LHC Run-3 and beyond.