Abstract
The development of the GeantV Electromagnetic (EM) physics package has followed two necessary paths towards code modernization. The first phase required revising the main electromagnetic physics models and their implementation. The main objectives were to improve their accuracy, to extend them to the new high-energy frontier posed by the Future Circular Collider (FCC) programme, and to allow better adaptation to a multi-particle flow. Most of the EM physics models in GeantV have been reviewed from a theoretical perspective and rewritten with vector-friendly implementations; they are now available in scalar mode in the alpha release. The second phase consisted of a thorough investigation of the possibility to vectorise the most CPU-intensive parts of the physics code, such as final state sampling. We have shown the feasibility of implementing electromagnetic physics models that take advantage of SIMD/SIMT architectures, thus obtaining gains in performance. After this phase, the time has come for the GeantV project to take a step forward towards the final proof of concept: testing the full simulation chain (transport + physics + geometry) running in vectorized mode. In this paper we present the first benchmark results obtained after vectorizing a full set of electromagnetic physics models.
Highlights
Large Hadron Collider (LHC) experiments rely heavily on Monte Carlo simulations of particle transport and interaction with detector material
The project investigates the potential computational benefits of a multiple-track transportation approach instead of the classical single-particle transportation flow. This is done in order to improve code and data locality and to artificially enhance the data-level parallelism (DLP) of the simulation software, enabling Single Instruction Multiple Data (SIMD)/SIMT execution models that combine the benefits of vectorization and multithreading
In this paper we present several benchmarks obtained from the vectorization of the main electromagnetic physics models
Summary
Large Hadron Collider (LHC) experiments rely heavily on Monte Carlo simulations of particle transport and interaction with detector material. At compilation time the code is specialized for a specific type of backend (scalar, SSE, AVX, AVX2, ...), enabling vectorization while maintaining readability, maintainability and portability. This vectorization model is at the base of the success of the vectorized geometry library developed in the framework of the GeantV project, VecGeom [10], which has been integrated in the production version of Geant4 since release 10.2 and is being progressively adopted by the major LHC experiments. The comparison with scalar executions is not always "fair", because the total execution time when running in scalar mode depends on the number of execution units of the CPU available for specific instructions, i.e. the number of instructions that can be executed simultaneously. For example, on a CPU with two execution units capable of double-precision division, the maximum speedup obtainable for a vectorized double-precision division would be ∼ 2 when the ideal one is 4. Another factor that has to be taken into consideration is the overhead that has to be paid to gather data into SIMD vectors, in order to be ready for vector operations.