Abstract

The CMS 2004 Data Challenge (DC04) was devised to test several key aspects of the CMS Computing Model in three ways: by trying to sustain a 25 Hz reconstruction rate at the Tier-0; by distributing the reconstructed data to six Tier-1 Regional Centres (CNAF in Italy, FNAL in the US, GridKA in Germany, IN2P3 in France, PIC in Spain, RAL in the UK) and handling catalogue issues; and by granting data accessibility at remote centres for analysis. Simulated events, up to the digitization step, were produced prior to the Data Challenge as input for the reconstruction in the Pre-Challenge Production (PCP04). In this paper, the model of the Tier-0 implementation used in DC04 is described, as well as the experience gained in using the newly developed data distribution management layer, which allowed CMS to successfully direct the distribution of data from the Tier-0 to Tier-1 sites by loosely integrating a number of available Grid components. While developing and testing this system, CMS explored the overall functionality and limits of each component, in each of the different implementations that were deployed within DC04. The role of the Tier-1s is presented and discussed, from the import of reconstructed data from the Tier-0, to the archiving onto the local Mass Storage System (MSS), to the data distribution management to Tier-2s for analysis. Participating Tier-1s differed in available resources, setup and configuration. A critical evaluation of the results and performance achieved by adopting different strategies in the organization and management of each Tier-1 centre to support CMS DC04 is presented.
