Abstract

At present we have two main mass storage systems for archiving HEP experimental data at the INFN-CNAF Tier-1: an HSM software system (CASTOR) and about 250 TB of different storage devices over SAN. This paper briefly describes our hardware and software environment and summarizes the technical solutions adopted to obtain better availability and high data throughput from the front-end disk servers. Our computing resources, consisting of farms of dual-processor nodes (currently about 1000 nodes providing 1300 kSpecInt2000), need to access the data through a fast and reliable I/O infrastructure. Parallel file systems nowadays provide a valid solution for achieving high I/O throughput. The last part of this paper reports the results of detailed tests we performed with GPFS and Lustre over SAN.
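
To give a concrete idea of the kind of measurement such throughput tests involve, the following is a minimal, illustrative sketch of a sequential-write benchmark in Python. It is not the procedure used in the paper (the reported tests were run on GPFS and Lustre with their own tooling and many concurrent clients); the mount point /gpfs/scratch/bench.dat is a hypothetical example path.

    import os
    import time

    def measure_write_throughput(path, size_mb=1024, block_kb=1024):
        """Write size_mb MB in block_kb KB blocks and return MB/s.

        Illustrative only: a realistic parallel file system benchmark
        would use many clients writing concurrently over the SAN.
        """
        block = os.urandom(block_kb * 1024)
        n_blocks = size_mb * 1024 // block_kb
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(n_blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # ensure data actually reaches the storage layer
        elapsed = time.time() - start
        return size_mb / elapsed

    if __name__ == "__main__":
        # Hypothetical mount point of a parallel file system under test.
        print(f"{measure_write_throughput('/gpfs/scratch/bench.dat'):.1f} MB/s")
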
