Abstract

The paper shows that research in high-energy physics (HEP) and nuclear physics is impossible without large computing power and specialized software for data processing, modelling, and analysis. Combining a classic implementation of evolutionary algorithms with unsupervised machine learning creates a powerful approach that can effectively optimize the performance of complex applications, such as highly parallel applications for modelling particle transport in complex detectors, whose behaviour depends on a large number of correlated parameters. The approach can also be applied to optimize other software packages. The work analyses the software used to simulate the passage of elementary particles through matter in the processing of experiments at CERN's Large Hadron Collider, in particular the new-generation GeantV software package. The analysis identified the main factors that affect the performance of the calculations and can be used for stochastic optimization of the simulation package, as well as for identifying bottlenecks in the GeantV functional model: memory size, data volume, running time, and number of instructions as fitness functions, and number of physical events, number of buffered physical events, number of threads, and number of propagators as tuning parameters. A new genetic operator (the NGC operator), based on a centroid-method component, was then constructed and integrated into a typical genetic algorithm, which was used to optimize the GeantV software package. Studies have shown that for GeantV the proposed modified genetic algorithm reduces the overall execution time of batches of simulations, which is what was to be demonstrated.
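For illustration only, the following Python sketch shows how a genetic algorithm with a centroid-based operator could search over the GeantV launch parameters named in the abstract (number of events, buffered events, threads, propagators), using batch execution time as the fitness function. The parameter ranges, the run_geantv placeholder, and the details of centroid_operator are assumptions made for the sketch; they are not the operator or experimental setup from the paper.

```python
import random
import statistics

# Illustrative search space for the GeantV launch parameters named in the
# abstract; the concrete ranges are assumptions, not values from the paper.
PARAM_RANGES = {
    "n_events":          (10, 1000),
    "n_buffered_events": (4, 64),
    "n_threads":         (1, 32),
    "n_propagators":     (1, 8),
}

def run_geantv(params):
    """Placeholder fitness: wall-clock time of one simulation batch.

    A real implementation would launch a GeantV benchmark with these
    parameters and measure execution time (or instruction count / memory
    footprint, the other fitness candidates listed in the abstract).
    Here a synthetic value is returned so the sketch is runnable.
    """
    return (params["n_events"] / (params["n_threads"] * params["n_propagators"])
            + 0.05 * params["n_buffered_events"] + random.uniform(0.0, 1.0))

def random_individual():
    return {k: random.randint(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def centroid(group):
    """Component-wise mean of a group of individuals (the 'centre' used below)."""
    return {k: statistics.mean(ind[k] for ind in group) for k in PARAM_RANGES}

def centroid_operator(parent, elite, pull=0.5):
    """Sketch of a centroid-based genetic operator: pull the parent's genes
    toward the centroid of the current elite cluster, then clamp to the
    allowed ranges. This only illustrates the idea of combining a
    clustering-style centre with a GA, not the paper's NGC operator."""
    centre = centroid(elite)
    child = {}
    for k, (lo, hi) in PARAM_RANGES.items():
        value = parent[k] + pull * (centre[k] - parent[k])
        child[k] = max(lo, min(hi, round(value)))
    return child

def mutate(ind, rate=0.2):
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            ind[k] = random.randint(lo, hi)
    return ind

def optimize(pop_size=20, generations=15, elite_frac=0.25):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=run_geantv)            # lower time is better
        elite = scored[: max(2, int(elite_frac * pop_size))]   # fittest cluster
        # Next generation: keep the elite, fill the rest with mutated
        # centroid-operator offspring of randomly chosen elite parents.
        population = elite + [
            mutate(centroid_operator(random.choice(elite), elite))
            for _ in range(pop_size - len(elite))
        ]
    return min(population, key=run_geantv)

if __name__ == "__main__":
    print("best launch parameters found:", optimize())
```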
