Abstract

The high energy physics experiment CDF, located at the antiproton–proton collider at Fermilab, will write data in Run 2 at a rate of 20 MByte/s, twenty times the rate of Run 1. The offline production system must be able to handle this rate. Components of that system include a large PC farm, I/O systems to move data to and from mass storage, and a system to split the reconstructed data into the physics streams required for analysis. All of the components must work together seamlessly to ensure the necessary throughput. The overall hardware and software design of the system will be presented. A small prototype farm has been used for about one year to study performance, to test software designs, and to run the first Mock Data Challenge. Results from these tests and experience from the first Mock Data Challenge will be discussed. The hardware for the first production farm is in place and will be used for the second Mock Data Challenge. Finally, the possible scaling of the system to handle the larger rates foreseen later in Run 2 will be described.
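
As a rough illustration of the splitting step mentioned above, the sketch below routes reconstructed events into physics streams according to the triggers they satisfied. It is a minimal sketch under assumed conventions, not CDF's actual software: the function, stream names, trigger tags, and event representation are all invented for illustration, and an event that fires several triggers may be copied into several streams.

    # Hypothetical sketch of splitting reconstructed data into physics
    # streams by trigger tag. All names here are illustrative, not CDF's.
    from collections import defaultdict

    def split_into_streams(events, stream_map):
        """Group events into physics streams; an event satisfying
        several triggers is copied into each matching stream."""
        streams = defaultdict(list)
        for event in events:
            for trigger in event["triggers"]:
                stream = stream_map.get(trigger)
                if stream is not None:
                    streams[stream].append(event)
        return streams

    # Illustrative usage with made-up trigger-to-stream assignments.
    stream_map = {"HIGH_PT_ELECTRON": "B_PHYSICS", "JET_100": "QCD"}
    events = [
        {"id": 1, "triggers": ["HIGH_PT_ELECTRON"]},
        {"id": 2, "triggers": ["JET_100", "HIGH_PT_ELECTRON"]},
    ]
    result = split_into_streams(events, stream_map)
    print({s: [e["id"] for e in evs] for s, evs in result.items()})
    # -> {'B_PHYSICS': [1, 2], 'QCD': [2]}

In a production farm of the kind the abstract describes, such routing would presumably operate on event files staged from mass storage rather than in-memory records, with stream assignments driven by the experiment's trigger tables.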
