Abstract

The international Future Circular Collider (FCC) study aims at designing pp, e+e−, and e±p colliders to be built in a new 100-km tunnel in the Geneva region. The electroweak, Higgs and top factory (FCC-ee) is designed to provide collisions at centre-of-mass energies between 90 GeV (Z pole) and 365 GeV (tt̄) and unprecedented integrated luminosities, producing huge amounts of data which will pose significant challenges to data processing. In this study, we discuss the needs in terms of storage and CPU for the diverse phases of the project, and the possible solutions, mostly based on the models developed for HL-LHC.

Highlights

  • The FCC-ee, the first stage of the Future Circular Collider (FCC) integrated programme [1], plans to collide e+e− at various centre-of-mass energies

  • The computing needs for FCC-ee are driven by the Z run and are usually considered comfortable, in particular considering that no or negligible pile-up is expected for an e+e− collider

  • After presenting the typical workflows relevant for this study in Sect. 2, in Sects. 3 and 4 we estimate the needs in terms of storage and computing for the diverse phases of the project, namely Monte Carlo generation, detector simulation, and analysis

  Footnote 1: Machine–detector-interface-induced backgrounds can potentially be important at FCC-ee; they are the subject of ongoing detailed studies, and the current results show that they should not significantly affect the size of the data samples [3]

Introduction

The FCC-ee, the first stage of the Future Circular Collider (FCC) integrated programme [1], plans to collide e+e− at various centre-of-mass energies. FCC-ee is planned to start operation after the high-luminosity stage of the LHC (HL-LHC) is completed, i.e. around 2040. The computing needs for FCC-ee are driven by the Z run and are usually considered comfortable, in particular because no or negligible pile-up is expected for an e+e− collider. We assume the bulk of the studies, driven by the Physics Performance group [2], will be run during the three years 2022–2024. We also need assumptions about the number of detector concepts to be evaluated; this is more complicated, and the only practical approach is to estimate the resources needed as a function of the number of detector variations under study. After presenting the typical workflows relevant for this study in Sect. 2, in Sects. 3 and 4 we estimate the needs in terms of storage and computing for the diverse phases of the project, namely Monte Carlo generation, detector simulation, and analysis.
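The approach of scaling resources with the number of detector variations can be sketched as a back-of-the-envelope estimate. The sketch below is illustrative only: the function names, the HS06 conversion factor, and all numerical inputs (event counts, event sizes, per-event CPU times) are placeholder assumptions for this example, not figures taken from the study.

```python
# Illustrative back-of-the-envelope resource estimate for an e+e- factory run.
# All numbers below are placeholder assumptions chosen for illustration only;
# they are NOT the values used in the FCC-ee study itself.

def storage_tb(n_events: float, event_size_kb: float) -> float:
    """Total storage in TB for n_events of a given per-event size (kB)."""
    return n_events * event_size_kb / 1e9  # kB -> TB

def cpu_hs06_hours(n_events: float, sec_per_event: float,
                   hs06_per_core: float = 10.0) -> float:
    """CPU need in HS06-hours, assuming a linear cost per simulated event."""
    return n_events * sec_per_event * hs06_per_core / 3600.0

# Needs scale linearly with the number of detector concepts simulated.
n_detector_variants = 3      # assumption: concepts under evaluation
n_mc_events = 1e10           # assumption: size of the Monte Carlo sample
aod_size_kb = 20.0           # assumption: per-event AOD size in kB
sim_sec_per_event = 5.0      # assumption: full-simulation time per event

total_storage = n_detector_variants * storage_tb(n_mc_events, aod_size_kb)
total_cpu = n_detector_variants * cpu_hs06_hours(n_mc_events, sim_sec_per_event)
print(f"AOD storage: {total_storage:.0f} TB")
print(f"CPU: {total_cpu:.2e} HS06-hours")
```

The linear model makes explicit why the number of detector variants is the key unknown: every variant multiplies both the simulation CPU and the derived-data storage by the same factor.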

Typical workflows
Storage
RAW event sizes
AOD event sizes
RAW data and the event format for full simulation
AOD data samples
Monte Carlo generation
Detector simulation
Detector parameterisation
Analysis
Ways ahead
Improving the parameterised simulation
Minimal needs in terms of simulation statistics
Conclusions and outlook
