Abstract

The future of data management for the LHC at CERN brings new requirements for scalability and a change in scheduling and data handling compared to the HSM mass storage system in use today. The forecast for disk-based storage volume at CERN in 2015 is on the Exabyte scale, with hundreds of millions of files. A new CERN storage architecture is presented as a storage cluster with an analysis, archive and tape pool, container-based data movements and decoupled namespaces. The main assets of the new system are high availability and life-cycle management for large storage installations; today these are among the major issues at the CERN computer centre, with more than 1,000 disk servers and continuous hardware replacement. Another key point is distributed metadata handling with in-memory caching and persistent key-value stores to reduce latencies and operational complexity. The focus of this paper is the analysis pool implementation, which provides low-latency, non-sequential file access and a hierarchical namespace. A summary of performance indicators and first operational experiences is reported.
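To illustrate the metadata-handling idea mentioned above (an in-memory cache in front of a persistent key-value store), the following minimal sketch shows one possible pattern. It is not the system described in the paper; the class name, the JSON-style metadata payload, and the use of Python's standard dbm module as the persistent store are assumptions made purely for demonstration.

    import dbm

    class MetadataCache:
        """Sketch: in-memory cache backed by a persistent key-value store."""

        def __init__(self, path):
            self.cache = {}                   # in-memory cache: path -> metadata
            self.store = dbm.open(path, "c")  # persistent key-value store on disk

        def get(self, key):
            # Serve from memory when possible to avoid disk latency.
            if key in self.cache:
                return self.cache[key]
            k = key.encode()
            if k in self.store:
                value = self.store[k].decode()
                self.cache[key] = value       # populate cache on a miss
                return value
            return None

        def put(self, key, value):
            # Write through to the persistent store, then update the cache.
            self.store[key.encode()] = value.encode()
            self.cache[key] = value

    # Hypothetical usage: namespace entries keyed by file path.
    cache = MetadataCache("namespace.db")
    cache.put("/analysis/run0001/file.root", '{"size": 1048576, "owner": "alice"}')
    print(cache.get("/analysis/run0001/file.root"))

The point of the pattern is that repeated lookups are served from memory, while the persistent store preserves the namespace across restarts; a distributed deployment would partition the key space across nodes, which this sketch does not attempt to show.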

