Abstract

High Energy Physics (HEP) experiments at particle colliders need complex computing infrastructures to extract knowledge from the large datasets they collect, with more than 1 exabyte of data stored by the experiments to date. The computing needs of the world's most powerful machine, the Large Hadron Collider (LHC) at CERN in Geneva, seeded the large-scale Grid R&D and deployment efforts of the first decade of the 2000s, which in retrospect proved adequate for LHC data processing. The upcoming upgrade of the collider, the High-Luminosity LHC (HL-LHC), is foreseen to require an increase in computing resources by a factor of 10 to 100, currently expected to exceed the scalability of the existing distributed infrastructure. Current lines of R&D are presented and discussed. With the start of other big scientific endeavours of computing complexity similar to HL-LHC (SKA, CTA, DUNE, ...), these efforts are expected to remain valid for science fields outside HEP.
